CN114228617A - Image generation method, device, equipment, storage medium and vehicle

Image generation method, device, equipment, storage medium and vehicle

Info

Publication number
CN114228617A
CN114228617A
Authority
CN
China
Prior art keywords
image
vehicle
target object
surrounding environment
blind area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111622350.8A
Other languages
Chinese (zh)
Inventor
徐立华 (Xu Lihua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd filed Critical Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202111622350.8A
Publication of CN114228617A
Legal status: Pending

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/304 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using merged images, e.g. merging camera image with stored images
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/802 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides an image generation method, an image generation apparatus, a device, a storage medium and a vehicle, relating to the field of the Internet of Vehicles, in particular to vehicle-road cooperation technology, and applicable to scenarios in which live-action views are generated and displayed for intelligent vehicles. One embodiment of the method comprises: shooting the surrounding environment to obtain a first image; receiving an image of a surrounding blind area as a second image, wherein the blind area is formed by occlusion of an object within a photographing range; and generating a live-action view of the surrounding environment according to the first image and the second image.

Description

Image generation method, device, equipment, storage medium and vehicle
Technical Field
Embodiments of the present disclosure relate to the field of the Internet of Vehicles, in particular to vehicle-road cooperation technology, and can be applied to scenarios in which live-action views are generated and displayed for intelligent vehicles.
Background
With the rapid development of automotive intelligence, many vehicles are equipped with various functions that assist travel and enhance the passenger and driving experience. Currently, more and more vehicles are equipped with a live-action surround-view function, which represents the real environment around the vehicle with live-action views.
Generally, the live-action surround-view function uses cameras provided on the vehicle (e.g., cameras in four directions: front, rear, left, and right) to capture images of the surrounding environment, and then performs operations such as surround-view stitching and rendering on the images to obtain a live-action view of the surroundings. In practice, however, a large object in the surrounding environment (such as another vehicle nearby) may occlude a certain area, so that the real scene of that area cannot be captured.
Disclosure of Invention
The embodiment of the disclosure provides an image generation method, an image generation device, equipment, a storage medium and a vehicle.
In a first aspect, an embodiment of the present disclosure provides an image generation method, including: shooting the surrounding environment to obtain a first image; receiving an image of a surrounding blind area as a second image, wherein the blind area is formed by occlusion of an object within a photographing range; and generating a live-action view of the surrounding environment according to the first image and the second image.
In a second aspect, an embodiment of the present disclosure provides an image generation apparatus, including: the shooting module is configured to shoot the surrounding environment to obtain a first image; a receiving module configured to receive an image of a surrounding blind area as a second image, wherein the blind area is formed by occlusion of an object within a photographing range; a generating module configured to generate a live-action view of the surrounding environment from the first image and the second image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: an image acquisition device and at least one processor; and a memory communicatively coupled to the at least one processor; wherein the image acquisition device is configured to capture images, and the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a vehicle including the electronic device described in the third aspect.
In a fifth aspect, the disclosed embodiments propose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as described in any one of the implementations of the first aspect.
In a sixth aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the image generation method, apparatus, device, storage medium, and vehicle provided by the embodiments of the present disclosure, a vehicle acquires images of the blind areas within its shooting range through information interaction with the various objects in its surrounding environment. A live-action view with a wider field of view is then generated from the images captured by the vehicle together with the blind-area images acquired through information sharing. The vehicle thus obtains more information about its surroundings, which improves the user experience and can further improve driving safety.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of an image generation method of the present disclosure;
FIG. 3 is a schematic illustration of blind area formation;
FIGS. 4a-4b are schematic diagrams of live-action views in the prior art;
FIG. 5 is a schematic diagram of an application scenario of the image generation method of the embodiments of the present disclosure;
FIG. 6 is a schematic structural diagram of one embodiment of an image generation apparatus of the present disclosure;
FIG. 7 is a schematic structural diagram of one embodiment of a vehicle according to the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the image generation method or the image generation apparatus of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a vehicle 101. The vehicle 101 may be any of various types of vehicles, such as a human-driven vehicle, an unmanned vehicle, or a robotic vehicle. The vehicle 101 may include a live-action image processing device. Specifically, the live-action image processing device may include an image capture device (such as a camera) for capturing images of the surrounding environment, and may further include an information display device (such as various screen panels) for displaying various information (such as the images captured by the image capture device).
Meanwhile, the vehicle 101 may also interact with various objects in the surrounding environment, for example exchanging information with other vehicles nearby, and may receive information transmitted by those objects (such as images they have captured). To this end, the live-action image processing device may further include an image processing device (e.g., a processor or controller) to process such information, and the processing result (e.g., a generated live-action view of the surrounding environment) may be displayed by the information display device (see reference numeral 102 in the figure).
In some cases, the vehicle 101 may communicate with a server (not shown) to receive or transmit information. For example, the server may collect various travel data of the vehicle 101 and send a travel reference policy or the like to the vehicle 101. The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
It should be noted that the image generation method provided by the embodiments of the present disclosure is generally executed by the live-action image processing device in the vehicle 101; accordingly, the image generation apparatus is generally provided in the live-action image processing device in the vehicle 101.
It should be understood that the number of vehicles in FIG. 1 is merely illustrative. There may be any number of vehicles, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of an image generation method according to the present disclosure is shown. The image generation method includes the steps of:
step 201, shooting the surrounding environment to obtain a first image.
In the present embodiment, the execution subject of the image generation method (such as the vehicle 101 shown in fig. 1) may shoot its surroundings with various image acquisition devices (such as cameras) mounted on it, and take the captured image as the first image. The surrounding environment refers to the environment of the area where the execution subject is located.
The positions and number of the image acquisition devices installed on the execution subject can be set flexibly according to actual application requirements or scenarios. For example, cameras may be installed on the execution subject in four directions: front, rear, left, and right. When there are multiple image acquisition devices, the first image comprises the images respectively captured by each of them.
Step 202, receiving an image of the surrounding blind area as a second image.
In the present embodiment, the execution subject's surrounding blind area refers to a blind area formed by the occlusion of an object within its shooting range. An object within the shooting range may be any object that can be photographed, including but not limited to other vehicles, obstacles, and the like.
For example, if the shooting range of the cameras on a vehicle "A" includes a vehicle "B", the area that the cameras on the vehicle "A" cannot observe because of the occlusion by the vehicle "B" is the blind area corresponding to the vehicle "A".
Specifically, and by way of example, fig. 3 illustrates a schematic diagram 300 of blind area formation. As shown in fig. 3, the vehicle "A" is provided with four cameras "A1", "A2", "A3", and "A4" at the front, rear, left, and right of the vehicle body, respectively, to shoot the surroundings of the vehicle "A". The vehicle "B" is provided with four cameras "B1", "B2", "B3", and "B4" at the front, rear, left, and right of the vehicle body, respectively, to shoot the surroundings of the vehicle "B". For the vehicle "A", its cameras cannot observe a partial region on the right side of the vehicle "B" because of the occlusion by the vehicle "B" on its right side, thereby forming a blind area.
Correspondingly, the cameras of the vehicle "B" cannot observe a partial region on the left side of the vehicle "A" because of the occlusion by the vehicle "A" on its left side, so that a blind area (not shown in the figure) is formed.
The image of the blind area refers to an image that presents the environment of the blind area. Specifically, the execution subject may receive images of its surrounding blind areas from various other subjects and take the received images as second images. The execution subject and the subjects sending the blind-area images may interact using various communication modes, which can be chosen flexibly according to the actual application scenario.
For example, the execution subject may receive the second image from a server to which it is communicatively connected. As another example, it may receive the second image from an object in its surroundings (e.g., a monitoring camera installed along the road).
Step 203, generating a live-action view of the surrounding environment according to the first image and the second image.
In the present embodiment, a live-action view refers to an image that presents a real scene, and a live-action view of the surrounding environment refers to an image that presents the real environment around the execution subject. In some cases, the live-action view may be presented in the form of a live-action map.
Specifically, the execution subject may generate the live-action view of the surrounding environment from the first image and the second image using various existing live-action image generation methods. For example, the surrounding environment may be three-dimensionally reconstructed from the first image and the second image, and the live-action view formed by three-dimensional rendering or the like. A simplified composition sketch is given below.
Continued reference is made to figs. 4a and 4b, which show schematic diagrams of live-action views in the prior art. In the prior art, a vehicle's cameras capture images of the surrounding environment and a live-action view is generated from them. In a scene with many or relatively complex obstacles (such as a parking lot), however, various obstacles (such as other vehicles) may surround the vehicle and severely limit what the captured images can present. As the live-action views shown in figs. 4a and 4b illustrate, when objects such as other vehicles are present around the vehicle, its cameras cannot capture the environment those vehicles occlude, and the field of view of the captured images is limited.
In contrast, according to the present embodiment, by receiving images of its surrounding blind areas, the vehicle can combine the images of the surrounding environment captured by its own cameras with the blind-area images to generate the live-action view. Through this view, the environment of the areas occluded by other objects, such as surrounding vehicles, can be known, and a wider field of view is presented.
In some optional implementations of the present embodiment, the execution subject may receive the second image from a target object, where the target object is an object causing a blind area. Taking fig. 3 as an example, the vehicle "A" may receive, from the vehicle "B" that causes its blind area, an image of the blind area due to the vehicle "B". Correspondingly, the vehicle "B" may receive, from the vehicle "A", an image of the blind area due to the vehicle "A".
Because the object causing the blind area is usually near the execution subject, the second image can be obtained by direct information interaction with that nearby object through various short-range communication technologies. This avoids situations such as network congestion or communication failures that may occur when communicating with a server, and thereby ensures the stability of live-action view generation.
Specifically, the second image may be received from the target object in various ways. For example, each object in the environment (such as a vehicle or a road-condition monitoring camera) may capture images of its surroundings with its own camera and share them. Each vehicle can then examine the images shared by all surrounding objects and screen out the blind-area image shared by the object causing its blind area.
Alternatively, the second image may be received from the target object through the following steps:
Step one: in response to detecting the target object, send an image sharing request to the target object.
In this step, surrounding objects may be detected by a camera, by a short-range communication technology, or the like. Upon detecting a target object causing a blind area, an image sharing request may be sent to it to request the image of the blind area it causes.
Step two: receive the second image sent by the target object based on the image sharing request.
In this step, after receiving the image sharing request, the target object may return the second image to the subject that sent the request. The target object may acquire the second image in various ways. For example, the image sharing request may include the position information of the requesting subject; the target object may then determine, from that position information and its own position, the blind area formed by its own occlusion, and acquire an image of that blind area as the second image from a server or from a locally stored map.
By detecting the object forming the blind area and sharing images on request, unnecessary information interaction can be avoided, which saves communication resources and helps reduce the cost of generating the live-action view. A hypothetical sketch of this handshake follows.
Alternatively, the second image may be obtained by the target object shooting its own surroundings. In this case, the target object may directly take an image of its surrounding environment (e.g., captured by an image acquisition device such as its own camera) as the second image, or may process that image (e.g., crop out only the blind-area portion) and take the processed image as the second image.
In addition, the target object may also send various other information related to the blind area (such as obstacles recognized by a neural network model) to the execution subject, so that the execution subject can more fully understand the real scene of the area occluded by the target object.
Because in many cases the objects causing blind areas in the surrounding environment also have an image acquisition function, the images they capture of their surroundings can be used directly as second images; such images are also of relatively high quality, which helps improve both the acquisition efficiency and the quality of the second image.
Alternatively, the target object may be a vehicle in the surrounding environment. Since vehicles today usually shoot the surrounding environment while driving in order to assist driving, sharing images among vehicles improves the utilization of the images vehicles capture of their surroundings.
In some optional implementations of this embodiment, after the live-action view of the surrounding environment is generated, it may further be displayed, so that the driver or the control system of the execution subject can understand the surrounding environment more intuitively and clearly from the view. A driving strategy can then be arranged reasonably on that basis, ensuring driving safety.
Optionally, the live-action view may be displayed in a preset display mode. The specific display mode can be set flexibly according to the actual application scenario so that the display effect meets actual requirements. For example, the display mode may include a display duration, a display position, and a display effect (e.g., scrolling display).
As another example, the live-action view may be displayed with a preset transparency. The transparency may apply to the live-action view as a whole, or different transparencies may be set for different parts of the view; it can be set flexibly according to actual application requirements.
Through flexible configuration of the display mode, the presentation of the live-action view can be made to match actual requirements, improving its usefulness. One simple way to realize the preset transparency is sketched below.
With continued reference to fig. 5, fig. 5 shows an exemplary application scenario 500 of the image generation method according to the present embodiment. In the application scenario of fig. 5, there is a vehicle "B" 502 on the front side of a vehicle "A" 501 and a vehicle "C" 503 on its right side. The vehicle "A" may send image sharing requests to the vehicle "B" 502 and the vehicle "C" 503, receiving from the vehicle "B" 502 a first blind-area image corresponding to the blind area 5021 on the rear side of the vehicle "B", and from the vehicle "C" a second blind-area image corresponding to the blind area 5031 on the right side of the vehicle "C", while shooting the surrounding environment with its own cameras to obtain captured images. The vehicle "A" may then generate the live-action view 504 of the surrounding environment by fusing the captured images with the received first and second blind-area images, and may further display the generated live-action view 504 to assist its travel.
For example, in a parking-lot environment, the vehicle "A" can learn from the live-action view the details of the rear side of the vehicle "B" 502 and of the area on the right side of the vehicle "C" 503, and thereby choose a parking position appropriately. As another example, when driving on a road, the vehicle "A" can learn from the live-action view the current state of a traffic light occluded by the vehicle "B" 502 or the vehicle "C" 503, and can then set its driving strategy reasonably to ensure safety on the road.
According to the image generation method provided by the embodiments of the present disclosure, a vehicle can acquire images of the blind areas within its shooting range through information interaction with the objects in its surrounding environment, and can then generate a live-action view with a wider field of view from the images it captures together with the blind-area images acquired through information sharing. The vehicle thereby learns more about its surroundings, more reference information becomes available for setting its driving strategy, the user experience improves, and driving safety can be further enhanced.
With further reference to fig. 6, as an implementation of the method shown in fig. 2 described above, the present disclosure provides an embodiment of an image generation apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which may be applied in various electronic devices in particular.
As shown in fig. 6, the image generating apparatus 600 provided in the present embodiment includes a shooting module 601, a receiving module 602, and a generating module 603. The shooting module 601 is configured to shoot the surrounding environment, so as to obtain a first image; the receiving module 602 is configured to receive an image of a surrounding blind area as a second image, wherein the blind area is formed by occlusion of an object within a shooting range; the generating module 603 is configured to generate a live-action view of the surrounding environment from the first image and the second image.
In the present embodiment, in the image generation apparatus 600: for the detailed processing of the shooting module 601, the receiving module 602, and the generating module 603 and the technical effects thereof, reference may be made to the descriptions of steps 201-203 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the receiving module 602 is further configured to: a second image is received from a target object, wherein the target object is an object forming a blind spot.
In some optional implementations of this embodiment, the receiving module 602 is further configured to: in response to detecting the target object, sending an image sharing request to the target object; and receiving a second image sent by the target object based on the image sharing request.
In some optional implementations of the embodiment, the second image is obtained by shooting the surrounding environment of the target object by the target object.
In some optional implementations of the present embodiment, the target object is a vehicle in a surrounding environment.
In some optional implementations of the present embodiment, the image generation apparatus 600 further includes: a display module (not shown) configured to display the live-action view with a preset transparency.
FIG. 7 illustrates a structural schematic 700 of one embodiment of a vehicle according to the present disclosure. As shown in fig. 7, the vehicle 700 may include a body 701, a chassis 702, a running gear 703, a coupler draft gear 704, a brake 705, an image capture device 706, an image processing device 707, and an information presentation device 708.
The body 701, the chassis 702, the running gear 703, the coupler draft gear 704, and the brake 705 are basic components of the vehicle structure. The image capture device 706 may be any of various cameras for capturing images, for example, images of the vehicle's surroundings. The image processing device 707 may perform various image processing to obtain image processing results; for example, it may receive an image of a blind area corresponding to the vehicle, and then process the image captured by the image capture device 706 together with the received image to generate a live-action view of the surrounding environment. The information presentation device 708 may present various information, such as the image processing result (e.g., the live-action view) output by the image processing device 707.
It should be noted that the above is only a schematic description of the vehicle's composition. Depending on different needs, the vehicle may also include various other devices to provide different services or functions. For example, the vehicle may further include a storage device for storing various information, or a multimedia playing device for playing audio and video data.
FIG. 8 shows a schematic block diagram of an example electronic device 800 (such as the image processing apparatus shown in FIG. 7, etc.) that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of processing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or loaded from a storage unit 808 into a random-access memory (RAM) 803. The RAM 803 can also store the various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to one another by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, an image pickup device, and the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 801 executes the methods and processes described above, such as the image generation method. For example, in some embodiments, the image generation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the image generation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the image generation method by any other suitable means (e.g., by means of firmware).
The various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in this disclosure may be performed in parallel or sequentially or in a different order, as long as the desired results of the technical solutions provided by this disclosure can be achieved, and are not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (16)

1. An image generation method, comprising:
shooting the surrounding environment to obtain a first image;
receiving an image of a surrounding blind area as a second image, wherein the blind area is formed by occlusion of an object within a photographing range;
and generating a live-action view of the surrounding environment according to the first image and the second image.
2. The method of claim 1, wherein the receiving an image of the surrounding blind area as a second image comprises:
receiving the second image from a target object, wherein the target object is an object forming the blind area.
3. The method of claim 2, wherein the receiving the second image from a target object comprises:
in response to detecting the target object, sending an image sharing request to the target object;
receiving the second image sent by the target object based on the image sharing request.
4. The method of claim 3, wherein the second image is obtained by the target object shooting its surrounding environment.
5. The method of claim 4, wherein the target object is a vehicle in the surrounding environment.
6. The method according to one of claims 1-5, wherein the method further comprises:
and displaying the live-action view with a preset transparency.
7. An image generation apparatus comprising:
the shooting module is configured to shoot the surrounding environment to obtain a first image;
a receiving module configured to receive an image of a surrounding blind area as a second image, wherein the blind area is formed by occlusion of an object within a photographing range;
a generating module configured to generate a live-action view of the surrounding environment from the first image and the second image.
8. The apparatus of claim 7, wherein the receiving module is further configured to:
receiving the second image from a target object, wherein the target object is an object forming the blind area.
9. The apparatus of claim 8, wherein the receiving module is further configured to:
in response to detecting the target object, sending an image sharing request to the target object;
receiving the second image sent by the target object based on the image sharing request.
10. The apparatus of claim 9, wherein the second image is obtained by the target object shooting its surrounding environment.
11. The apparatus of claim 10, wherein the target object is a vehicle in the surrounding environment.
12. The apparatus according to one of claims 7-11, wherein the apparatus further comprises:
a display module configured to display the live-action view with a preset transparency.
13. An electronic device, comprising:
an image acquisition device and at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the image capture device is for capturing an image, and the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A vehicle comprising the electronic device of claim 13.
15. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
16. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
CN202111622350.8A 2021-12-28 2021-12-28 Image generation method, device, equipment, storage medium and vehicle Pending CN114228617A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111622350.8A CN114228617A (en) 2021-12-28 2021-12-28 Image generation method, device, equipment, storage medium and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111622350.8A CN114228617A (en) 2021-12-28 2021-12-28 Image generation method, device, equipment, storage medium and vehicle

Publications (1)

Publication Number Publication Date
CN114228617A 2022-03-25

Family

ID=80763826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111622350.8A Pending CN114228617A (en) 2021-12-28 2021-12-28 Image generation method, device, equipment, storage medium and vehicle

Country Status (1)

Country Link
CN (1) CN114228617A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934076A (en) * 2017-12-19 2019-06-25 广州汽车集团股份有限公司 Generation method, device, system and the terminal device of the scene image of vision dead zone
CN111191607A (en) * 2019-12-31 2020-05-22 上海眼控科技股份有限公司 Method, apparatus, and storage medium for determining steering information of vehicle
CN111582080A (en) * 2020-04-24 2020-08-25 杭州鸿泉物联网技术股份有限公司 Method and device for realizing 360-degree all-round monitoring of vehicle
CN112004051A (en) * 2019-05-27 2020-11-27 奥迪股份公司 Image display system for a vehicle, corresponding method and storage medium
CN112124201A (en) * 2020-10-10 2020-12-25 深圳道可视科技有限公司 Panoramic parking system with visible picture blind areas and method thereof
CN113139897A (en) * 2020-01-16 2021-07-20 现代摩比斯株式会社 Panoramic view synthesis system and method
CN113272177A (en) * 2018-11-15 2021-08-17 法雷奥开关和传感器有限责任公司 Method, computer program product, mobile communication device and communication system for providing visual information about at least a part of an environment
CN113442831A (en) * 2020-03-25 2021-09-28 斑马智行网络(香港)有限公司 Visual field blind area display method and device and navigation system adopting method
CN113614779A (en) * 2019-03-19 2021-11-05 捷豹路虎有限公司 Image processing system and method


Similar Documents

Publication Publication Date Title
US20190092345A1 (en) Driving method, vehicle-mounted driving control terminal, remote driving terminal, and storage medium
CN112650247A (en) Remote control method, cockpit, cloud server and automatic driving vehicle
US10994749B2 (en) Vehicle control method, related device, and computer storage medium
CN113276774B (en) Method, device and equipment for processing video picture in unmanned vehicle remote driving process
CN111291650A (en) Automatic parking assistance method and device
US11222409B2 (en) Image/video deblurring using convolutional neural networks with applications to SFM/SLAM with blurred images/videos
CN111736604A (en) Remote driving control method, device, equipment and storage medium
CN106534780A (en) Three-dimensional panoramic video monitoring device and video image processing method thereof
US20150116494A1 (en) Overhead view image display device
CN107798860A (en) Vehicle-mounted real-time imaging share system, method, equipment and storage medium
CN109118532A (en) Vision depth of field estimation method, device, equipment and storage medium
JP2023530545A (en) Spatial geometric information estimation model generation method and apparatus
CN112261340A (en) Visual field sharing method and device, electronic equipment and readable storage medium
CN115817463A (en) Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium
CN114298908A (en) Obstacle display method and device, electronic equipment and storage medium
KR102030904B1 (en) Method for sharing road situation and computer program recorded on record-medium for executing method therefor
JP7186749B2 (en) Management system, management method, management device, program and communication terminal
WO2024041198A1 (en) Intelligent driving method, apparatus and system, and domain controller, medium and vehicle
KR102095454B1 (en) Cloud server for connected-car and method for simulating situation
CN114228617A (en) Image generation method, device, equipment, storage medium and vehicle
CN116012609A (en) Multi-target tracking method, device, electronic equipment and medium for looking around fish eyes
CN113507559A (en) Intelligent camera shooting method and system applied to vehicle and vehicle
KR20190005364A (en) Audio video navigation apparatus and vehicle video monitoring system and method for utilizing user interface of the audio video navigation apparatus
CN115134496B (en) Intelligent driving control method, system, vehicle, electronic equipment and storage medium
TWI817578B (en) Assistance method for safety driving, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination