CN112929581A - Method, device and storage medium for processing photos or videos containing vehicles


Info

Publication number
CN112929581A
CN112929581A
Authority
CN
China
Prior art keywords
vehicle
image
photograph
video
user
Prior art date
Legal status
Pending
Application number
CN202110086883.2A
Other languages
Chinese (zh)
Inventor
陈剑峰 (Chen Jianfeng)
Current Assignee
Lingyue Digital Information Technology Co ltd
Original Assignee
Lingyue Digital Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Lingyue Digital Information Technology Co., Ltd.
Priority to CN202110086883.2A
Publication of CN112929581A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay

Abstract

A method, an apparatus, and a storage medium for processing a photograph or video containing a vehicle are disclosed. The method comprises: receiving, from a user, a photograph or video containing a first vehicle; identifying an image of the first vehicle in the received photograph or video; generating, based on a 3D model of a second vehicle and/or a panoramic image of the second vehicle's exterior, an image of the second vehicle having an attitude angle corresponding to that of the first vehicle, wherein the second vehicle is a vehicle of interest to the user; and replacing the image of the first vehicle with the generated image of the second vehicle to generate an updated photograph or video for display to the user.

Description

Method, device and storage medium for processing photos or videos containing vehicles
Technical Field
The present disclosure relates to a method, apparatus, and storage medium for processing a photograph or video containing a vehicle.
Background
Augmented reality technology is known. It is desirable to apply augmented reality technology to vehicle promotion scenarios to enhance user experience.
Disclosure of Invention
It is an object of the present disclosure to provide a new method and apparatus for processing a photograph or video containing a vehicle.
The present disclosure proposes a method of processing a photograph or video containing a vehicle, the method comprising: receiving, from a user, a photograph or video containing a first vehicle; identifying an image of the first vehicle in the received photograph or video; generating, based on a 3D model of a second vehicle and/or a panoramic image of the second vehicle's exterior, an image of the second vehicle having an attitude angle corresponding to that of the first vehicle, wherein the second vehicle is a vehicle of interest to the user; and replacing the image of the first vehicle with the generated image of the second vehicle to generate an updated photograph or video for display to the user.
Other features and advantages of the present disclosure will become apparent from the following description with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and together with the description, serve to explain, without limitation, the principles of the disclosure. In the drawings, like numbering is used to indicate like items.
Fig. 1 is a block diagram of an example vehicle photo and video processing device, according to some embodiments of the present disclosure.
Fig. 2 is a flow diagram illustrating an example vehicle photo and video processing method according to some embodiments of the present disclosure.
Fig. 3 is a flowchart illustrating exemplary detailed steps of an alternate vehicle image generation process according to some embodiments of the present disclosure.
FIG. 4 illustrates a general hardware environment in which the present disclosure may be applied, according to some embodiments of the present disclosure.
Detailed Description
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the described exemplary embodiments. It will be apparent, however, to one skilled in the art, that the described embodiments may be practiced without some or all of these specific details. In the described exemplary embodiments, well-known structures or processing steps have not been described in detail in order to avoid unnecessarily obscuring the concepts of the present disclosure.
The blocks within each block diagram shown below may be implemented by hardware, software, firmware, or any combination thereof to implement the principles of the present disclosure. It will be appreciated by those skilled in the art that the blocks described in each block diagram can be combined or divided into sub-blocks to implement the principles of the disclosure.
The steps of the methods presented in this disclosure are intended to be illustrative. In some embodiments, the method may be accomplished with one or more additional steps not described and/or without one or more of the steps discussed. Further, the order in which the steps of the method are illustrated and described is not intended to be limiting.
In the present disclosure, the "attitude angle" of the vehicle includes a pitch angle (pitch), a yaw angle (yaw), and a roll angle (roll). The meaning of these angles is well known. It should be understood that in some cases, such as where the vehicle in the photograph or video provided by the user is on level ground, only the yaw angle need be considered as the "attitude angle" of the vehicle.
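For illustration, the attitude-angle convention above, including the level-ground simplification, can be sketched as follows. The `AttitudeAngle` type and `effective_attitude` helper are illustrative names, not part of the disclosure:

```python
from dataclasses import dataclass


@dataclass
class AttitudeAngle:
    """Vehicle attitude as pitch/yaw/roll, in degrees."""
    pitch: float = 0.0
    yaw: float = 0.0
    roll: float = 0.0


def effective_attitude(angle: AttitudeAngle, on_level_ground: bool) -> AttitudeAngle:
    # On level ground, only the heading (yaw) matters; pitch and roll
    # can be treated as zero, as the disclosure notes.
    if on_level_ground:
        return AttitudeAngle(pitch=0.0, yaw=angle.yaw, roll=0.0)
    return angle


pose = effective_attitude(AttitudeAngle(pitch=2.0, yaw=45.0, roll=1.0),
                          on_level_ground=True)
print(pose)  # AttitudeAngle(pitch=0.0, yaw=45.0, roll=0.0)
```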
In the present disclosure, a "panoramic image of the appearance of a vehicle" includes a plurality of images captured by photographing the vehicle from the outside at many different angles.
Fig. 1 is a block diagram of an example vehicle photo and video processing device 100, according to some embodiments of the present disclosure. As shown in fig. 1, the apparatus 100 may include: a receiving component 110 configured to receive, from a user, a photograph or video containing a first vehicle; a vehicle image recognition component 120 configured to recognize an image of the first vehicle in the received photograph or video; a replacement vehicle image generation component 130 configured to generate, based on a 3D model of a second vehicle and/or a panoramic image of the second vehicle's exterior, an image of the second vehicle having an attitude angle corresponding to that of the first vehicle, wherein the second vehicle is a vehicle of interest to the user; a vehicle image replacement component 140 configured to replace the image of the first vehicle with the generated image of the second vehicle to generate an updated photograph or video for display to the user; a light and shadow adjustment component 150 configured to adjust the light and shadow variation on the second vehicle and the reflection of the second vehicle so that they match those in the photograph or video; and a modification component 160 configured to modify, in the updated photograph or video, the appearance configuration of the second vehicle, the on-off state of its lights, the vehicle model, and the scene image. Although not shown in fig. 1, the apparatus 100 may also include a local or remote storage component that may store 3D models of a plurality of vehicles including the second vehicle and/or panoramic images of vehicle exteriors (such as 360° or 720° panoramic images), and that may also temporarily buffer updated photographs or videos. The storage component may also store other data or information as actually needed.
Here, the received photo or video may show the user's own vehicle, or may be a photo or video containing a vehicle that the user likes. The photograph or video may be two-dimensional (2D) or three-dimensional (3D); a photograph containing depth information is 3D. The second vehicle may be a vehicle that the user is interested in and is considering purchasing.
The vehicle photo and video processing apparatus 100 according to the present disclosure may be a terminal device such as a user's smart terminal device provided with a camera and a display screen, or may be a server device such as a server device maintained by a vehicle manufacturer or a dealer. In the present disclosure, the smart terminal device may include: smart phones, tablets, AR (augmented reality) glasses, MR (mixed reality) glasses, and the like.
The operation of the various components shown in fig. 1 will be described in further detail below.
Fig. 2 is a flow diagram illustrating an example vehicle photo and video processing method 200 according to some embodiments of the present disclosure. In the following description, a manner of processing a vehicle photograph is described. It should be understood that the manner of processing the vehicle video would be similar. In particular, each frame in a video may be processed in a manner that processes photographs.
The method 200 begins at step S210, where the receiving component 110 receives a photograph containing a first vehicle provided by the user (i.e., an original photograph). For example, the user may provide a picture of his own vehicle taken in the parking lot at his home. Alternatively, the user may provide a picture of his own vehicle taken outdoors (such as on grassland, at the seaside, etc.). Still alternatively, the user may provide a favorite photograph containing a vehicle. In some cases, information provided by the user regarding the model of the first vehicle may also be received at step S210.
The method 200 proceeds to step S220, where the vehicle image recognition component 120 identifies the image of the first vehicle in the received photograph. It should be understood that the photograph to be processed may be one whose subject is a vehicle, for example, with the vehicle image occupying more than one third of the photograph's area. Machine learning techniques may be used to train a recognition model on a large number of pictures containing vehicles. The trained model may then be used to identify the image of the first vehicle in the received photograph.
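The one-third-of-area criterion above can be sketched as follows. The bounding box is assumed to come from a separately trained vehicle-detection model, which is out of scope here; the function name is illustrative:

```python
def vehicle_is_photo_subject(box, photo_width, photo_height, min_fraction=1.0 / 3.0):
    """Return True if the detected vehicle bounding box (x, y, w, h)
    covers at least `min_fraction` of the photo area -- the disclosure's
    example criterion for a photo whose subject is a vehicle."""
    x, y, w, h = box
    return (w * h) / (photo_width * photo_height) >= min_fraction


# An 800x500 box in a 1200x900 photo covers about 37% of the area.
print(vehicle_is_photo_subject((100, 200, 800, 500), 1200, 900))  # True
```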
The method 200 proceeds to step S230, where the replacement vehicle image generation component 130 generates the image of the second vehicle to be used to replace the image of the first vehicle. Exemplary detailed steps of the replacement vehicle image generation process are described below with reference to fig. 3.
At step S232, the component 130 generates, based on the 3D model of the second vehicle and/or the panoramic image of the second vehicle's exterior, an image of the second vehicle having an attitude angle corresponding to that of the first vehicle. For example, component 130 may generate an image of the second vehicle having substantially the same attitude angle as the first vehicle. Here, "substantially the same" means that, for each of the pitch, yaw, and roll angles, the angular difference is no more than 10% of the first vehicle's angle. The image of the second vehicle may be generated by feature matching or by similarity calculation.
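The 10% tolerance just described can be sketched as a small check (the function name is illustrative; note that an angle of the first vehicle equal to zero must then be matched exactly, so a production implementation would likely add an absolute tolerance as well):

```python
def substantially_same_attitude(first, second, tolerance=0.10):
    """Check the disclosure's 'substantially the same' criterion:
    for each of (pitch, yaw, roll) in degrees, the difference is no
    more than `tolerance` (10%) of the first vehicle's angle."""
    return all(
        abs(a - b) <= abs(a) * tolerance
        for a, b in zip(first, second)
    )


print(substantially_same_attitude((0.0, 40.0, 0.0), (0.0, 42.0, 0.0)))  # True
print(substantially_same_attitude((0.0, 40.0, 0.0), (0.0, 50.0, 0.0)))  # False
```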
In some embodiments, features of key components of the first vehicle may be identified by image recognition. A key component is a highly recognizable component, such as a lamp, bumper, front windshield, door glass, wheel hub, or rearview mirror. By matching the features of the identified key component(s) against the features of the corresponding key components of the second vehicle's 3D model at a plurality of attitude angles, an image of the second vehicle having the corresponding attitude angle can be generated. Specifically, the image may be generated from the pose of the 3D model whose degree of matching (or proximity) is highest. Here, the features of one or more key components include features of a single key component (such as contour features) and the relative positional relationships between multiple key components.
In still other embodiments, the image of the second vehicle having the corresponding attitude angle may be generated by calculating the similarity between the image of the first vehicle and the plurality of images, taken from many angles, that make up the panoramic image of the second vehicle's exterior. Specifically, the image with the highest calculated similarity may be taken as the image of the second vehicle with the corresponding attitude angle.
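The highest-similarity selection just described can be sketched as follows. The disclosure does not specify a similarity measure, so a simple grayscale-histogram dot product stands in for it here; the function names are illustrative:

```python
import numpy as np


def best_matching_view(target, panoramic_views):
    """Pick the index of the panoramic view most similar to the target
    vehicle image, as a stand-in for the disclosure's similarity
    calculation over the exterior panorama."""
    def signature(img):
        # 32-bin grayscale histogram as a crude appearance signature.
        hist, _ = np.histogram(img, bins=32, range=(0, 256), density=True)
        return hist

    target_sig = signature(target)
    scores = [float(np.dot(target_sig, signature(view))) for view in panoramic_views]
    return int(np.argmax(scores))
```

A real implementation would use a perceptual or learned similarity measure rather than raw histograms, but the select-the-best-scoring-view structure is the same.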
It should be understood that in the case where the first vehicle is located on level ground, only the yaw angle may be considered as the attitude angle of the first vehicle. In other words, in that case only the heading of the vehicle (−180° to +180°) may be considered as the attitude angle of the first vehicle.
At step S234, the component 130 may give the movable components of the second vehicle states corresponding to those of the movable components of the first vehicle. The movable components of a vehicle may include the doors, trunk, sunroof, etc. The state of a movable component may include its activity state, more specifically its open/closed state and degree of opening. Similarly, the image of the second vehicle may be obtained by feature matching or by similarity calculation as described above. This requires that 3D models of the second vehicle with its movable components in different states, and/or panoramic images of the exterior with the movable components in different states, be stored in advance. For example, where one or more doors of the first vehicle are open, the component 130 may give the corresponding door(s) of the second vehicle the same open/closed state and degree of opening in the generated image of the second vehicle. It should be understood that in some embodiments, steps S232 and S234 may be performed simultaneously.
At step S236, component 130 identifies the color and/or texture of the first vehicle's paint in the received photograph and makes the paint color and/or texture of the second vehicle in the generated image approximate it. For example, component 130 may give the second vehicle the same paint color and/or texture as the first vehicle, or a close one. Here, colors are close when, for example, they belong to the same color family. The texture of vehicle paint refers to the pattern or grain on the paint surface, and textures are close when, for example, they look similar.
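A minimal sketch of the paint-color step, assuming the paint region has already been segmented upstream. The "same color family" test is stood in for by a Euclidean RGB distance threshold, which is an assumption; a perceptual color space such as CIELAB would be a better real-world choice:

```python
import numpy as np


def dominant_paint_color(paint_pixels):
    """Mean RGB over the pixels identified as vehicle paint."""
    return np.asarray(paint_pixels, dtype=float).reshape(-1, 3).mean(axis=0)


def same_color_family(color_a, color_b, threshold=60.0):
    """Crude stand-in for the disclosure's 'same color family' test:
    RGB Euclidean distance below a threshold."""
    diff = np.asarray(color_a, dtype=float) - np.asarray(color_b, dtype=float)
    return float(np.linalg.norm(diff)) < threshold
```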
It will be understood that by giving the generated image of the second vehicle an attitude angle, movable-component states (the states of the doors, trunk, sunroof, and the like), and a paint color and/or texture corresponding to those of the first vehicle in the user-provided photograph, the image of the second vehicle will feel familiar and appealing to the user when it is presented.
Next, the method 200 proceeds to step S240, where the vehicle image replacement component 140 replaces the image of the first vehicle with the generated image of the second vehicle to generate an updated (enhanced) photograph for display to the user. The component 140 may remove the image of the first vehicle from the original photograph and then fill the image of the second vehicle into the resulting blank portion, thereby generating the enhanced photograph. Alternatively, the component 140 may superimpose the image of the second vehicle directly onto the image of the first vehicle in the original photograph. It should be understood that edge blurring and smoothing of the second vehicle may be performed as needed.
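The superimposition variant of step S240 can be sketched as masked alpha blending. A float mask with soft borders gives the edge blurring/smoothing the disclosure mentions; the function name is illustrative:

```python
import numpy as np


def superimpose(original, replacement, mask):
    """Overlay the second-vehicle image onto the original photo wherever
    `mask` is nonzero. `mask` is a float array in [0, 1]; values between
    0 and 1 at the borders blend the edges smoothly."""
    alpha = np.asarray(mask, dtype=float)[..., None]
    blended = original * (1.0 - alpha) + replacement * alpha
    return blended.astype(original.dtype)
```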
In some embodiments, where the image of the second vehicle is superimposed on the image of the first vehicle, the component 140 enables the following displays: alternately displaying the image of the first vehicle and the image of the second vehicle, for example while the user presses the display screen with a finger and when the finger leaves the screen; and simultaneously displaying both images in a semi-transparent manner, for example when the user double-taps the display screen. Further, when the two vehicle images are displayed simultaneously in a translucent manner, differences in the shape and/or size of corresponding parts of the first and second vehicles may be highlighted. For example, differences in the length, width, and height of the two vehicles may be highlighted, as may differences in the shape and size of their wheel hubs. Here, highlighting may be achieved by marking the differing portions in a conspicuous color. It should be appreciated that, for ease of comparison, the two vehicles may be placed on the same ground with their center positions (the midpoint between the nose and tail and between the roof and the ground) close to or coincident.
The shape and dimensions of the components of the first vehicle, such as the body and wheel hubs, can be obtained in various ways. As previously mentioned, the model of the first vehicle may be provided by the user, and the shape and size of each component can be determined from that model designation. Likewise, the shape and size of the second vehicle's components can be determined from its 3D model. A comparison of shape and size is thus possible. Dimensional data for the components of both vehicles may be displayed as desired, for example in pop-up windows.
By alternately or simultaneously displaying images of the first vehicle and the second vehicle, the user can intuitively feel the difference between the two vehicles. Further, by highlighting differences in shape and/or size, a user can be helped to recognize differences between particular components of two vehicles.
Next, the method 200 proceeds to step S250, where the light and shadow adjustment component 150 adjusts the light and shadow variation on the second vehicle and the reflection of the second vehicle so that they match those in the photograph.
In some embodiments, component 150 may identify the shading produced by the lighting conditions on the image of the first vehicle and apply it to the image of the second vehicle. For example, the component 150 may apply the highlights and shadows on the image of the first vehicle to the corresponding locations on the image of the second vehicle. In other embodiments, component 150 may identify the direction of the first vehicle's reflection, such as up, down, left, right, upper left, lower left, upper right, lower right, or directly below, and replace it with the second vehicle's reflection in the same direction. The second vehicle's reflection in each direction may be stored in advance.
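One simple way to "apply the identified shading" is to scale the second-vehicle image by the first vehicle's luminance pattern, normalized around its mean. The disclosure does not fix a method, so this is an illustrative sketch:

```python
import numpy as np


def transfer_shading(source_luminance, target_rgb):
    """Re-light the second-vehicle image with the first vehicle's
    highlight/shadow pattern: pixels brighter than the source's mean
    luminance brighten the target, darker ones darken it."""
    scale = source_luminance / max(float(source_luminance.mean()), 1e-6)
    relit = target_rgb.astype(float) * scale[..., None]
    return np.clip(relit, 0, 255).astype(np.uint8)
```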
It should be understood that original photographs taken in different natural environments and at different times may exhibit different light and shadow conditions. Processing the light and shadow variation and the reflection on the second vehicle makes the updated photograph more vivid and natural.
Although not shown, the method 200 may also include the step of displaying the updated photograph to the user. The updated photos may be displayed via the user's smart terminal device. It is to be understood that in case of displaying updated photos using AR glasses or MR glasses, a 3D display of the photos may be provided.
Next, the method 200 proceeds to step S260, where the modification component 160 may modify at least one of: the appearance configuration of the second vehicle; the on-off state of the second vehicle's lights; the scene image; and the model of the second vehicle. The modification may be performed in response to a user request, or automatically. For example, while the updated photograph is displayed, a graphical user interface may be superimposed on it for the user to initiate the request, such as a set of touch buttons, one per modification item, for the user to tap. Via such buttons, the appearance configuration of the second vehicle can be altered (such as its color, texture, or whether a roof rack is fitted); the vehicle lights can be turned on or off; the scene image of the original photograph can be changed (for example, a parking-lot scene can be changed to a grassland or seaside scene); and the model of the second vehicle can be changed. These operations give the user a richer experience. It should be appreciated that when the model of the second vehicle is changed, the method returns to step S230, where the component 130 regenerates the image of the second vehicle according to the newly selected model.
The processing of a vehicle photograph has been described above; a vehicle video may be processed similarly. When processing a vehicle video, the content replacement and the light and shadow adjustment can be performed in real time, so that the processed video can be displayed to the user in real time while the video is being received.
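The frame-by-frame video treatment described above can be sketched as a streaming loop, where `process_photo` stands for the whole per-photo pipeline (identify, generate, replace, adjust light and shadow) and frames are yielded as soon as they are ready:

```python
def process_video(frames, process_photo):
    """Apply the per-photo pipeline to each video frame, yielding
    updated frames as they are produced so the result can be shown
    to the user in near real time."""
    for frame in frames:
        yield process_photo(frame)


# Toy usage: 'frames' are just integers, 'processing' doubles them.
print(list(process_video([1, 2, 3], lambda f: f * 2)))  # [2, 4, 6]
```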
An exemplary vehicle photo and video processing method 200 has been described above with reference to figs. 2 and 3. It should be understood that the method 200 may be performed by the smart terminal device or the server device alone. For example, method 200 may be performed offline by a user's smartphone or by a tablet in a vehicle showroom. Alternatively, the method 200 may be executed by the smart terminal device and the server device in cooperation, each performing a portion of the method.
The vehicle photo and video processing method and apparatus of the present disclosure can generate, from a vehicle photograph provided by a user, a photograph of a vehicle the user is interested in, making it easy for the user to become familiar with, and develop a liking for, that vehicle. Moreover, while choosing a vehicle, the user can easily view its various appearance configurations, which makes vehicle selection more convenient and more engaging.
Hardware implementation
Fig. 4 illustrates a general hardware environment 400 in which the present disclosure may be applied, according to an exemplary embodiment of the present disclosure.
Referring to fig. 4, a computing device 400 will now be described as an example of a hardware device applicable to aspects of the present disclosure. Computing device 400 may be any machine configured to perform processing and/or computing, and may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a smart phone, a portable camera, or any combination thereof. The apparatus 100 described above may be implemented in whole or at least in part by a computing device 400 or similar device or system.
Computing device 400 may include elements connected with, or in communication with, a bus 402, possibly via one or more interfaces. For example, computing device 400 may include the bus 402, one or more processors 404, one or more input devices 406, and one or more output devices 408. The one or more processors 404 may be any type of processor, and may include, but are not limited to, one or more general-purpose processors and/or one or more special-purpose processors (such as special-purpose processing chips). Input device 406 may be any type of device capable of inputting information to the computing device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote control. Output device 408 may be any type of device capable of presenting information, and may include, but is not limited to, a display, speakers, a video/audio output terminal, and/or a printer. Computing device 400 may also include, or be connected with, a non-transitory storage device 410, which may be any storage device that is non-transitory and can implement a data store, and may include, but is not limited to, a disk drive, an optical storage device, solid-state storage, a floppy disk, a flexible disk, a hard disk, a tape or any other magnetic medium, a compact disc or any other optical medium, ROM (read-only memory), RAM (random-access memory), cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer can read data, instructions, and/or code. The non-transitory storage device 410 may be detachable from an interface, and may hold data/instructions/code for implementing the methods and steps described above. Computing device 400 may also include a communication device 412.
The communication device 412 may be any type of device or system capable of communicating with external devices and/or with a network, and may include, but is not limited to, a modem, a network card, an infrared communication device, and/or a wireless communication device such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication facility, and the like.
The bus 402 may include, but is not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computing device 400 may also include a working memory 414, working memory 414 may be any type of working memory that can store instructions and/or data useful for the operation of processor 404 and may include, but is not limited to, random access memory and/or read only memory devices.
Software elements may reside in the working memory 414, including, but not limited to, an operating system 416, one or more application programs 418, drivers, and/or other data and code. Instructions for performing the methods and steps described above may be included in the one or more applications 418, and the components of apparatus 100 described above may be implemented by the processor 404 reading and executing those instructions. More specifically, the replacement vehicle image generation component 130 may be implemented, for example, by the processor 404 when executing an application 418 having instructions to perform step S230 (or steps S232, S234, and S236). The vehicle image replacement component 140 may be implemented, for example, by the processor 404 when executing an application 418 having instructions to perform step S240. The light and shadow adjustment component 150 may be implemented, for example, by the processor 404 when executing an application 418 having instructions to perform step S250. Similarly, the receiving component 110, the vehicle image recognition component 120, and the modification component 160 may be implemented, for example, by the processor 404 when executing applications 418 having instructions to perform steps S210, S220, and S260, respectively. Executable code or source code for the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as the storage device(s) 410 described above, and may be read into the working memory 414, possibly being compiled and/or installed there. Executable code or source code for the instructions of the software elements may also be downloaded from a remote location.
From the above embodiments, it is apparent to those skilled in the art that the present disclosure can be implemented by software and necessary hardware, or can be implemented by hardware, firmware, and the like. Based on this understanding, embodiments of the present disclosure may be implemented partially in software. The computer software may be stored in a computer readable storage medium, such as a floppy disk, hard disk, optical disk, or flash memory. The computer software includes a series of instructions that cause a computer (e.g., a personal computer, a service station, or a network terminal) to perform a method or a portion thereof according to various embodiments of the disclosure.
Having thus described the disclosure, it will be apparent that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims (12)

1. A method of processing a photograph or video containing a vehicle, comprising:
receiving a photograph or video provided by a user containing a first vehicle;
identifying an image of the first vehicle in the received photograph or video;
generating an image of a second vehicle having an attitude angle corresponding to that of the first vehicle, based on a 3D model of the second vehicle and/or a panoramic image of the appearance of the second vehicle, wherein the second vehicle is a vehicle of interest to the user; and
replacing the image of the first vehicle with the generated image of the second vehicle to generate an updated photograph or video for display to the user.
2. The method of claim 1, further comprising: adjusting the lighting and reflections on the second vehicle, and the reflection of the second vehicle, so that they match the lighting and reflections in the photograph or video.
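One simple way to realize the lighting match of claim 2 is to shift the rendered vehicle's per-channel statistics toward those of the photographed scene. This is only a sketch under stated assumptions: the claim also covers reflections, which a per-channel adjustment cannot capture, and `match_lighting` is an assumed name.

```python
import numpy as np

def match_lighting(rendered, scene_region):
    # Align each channel's mean and standard deviation of the rendered
    # second-vehicle image with the scene around the first vehicle.
    r = rendered.astype(np.float64)
    s = scene_region.astype(np.float64)
    for c in range(r.shape[-1]):
        r_mu, r_sd = r[..., c].mean(), r[..., c].std() + 1e-8
        s_mu, s_sd = s[..., c].mean(), s[..., c].std()
        r[..., c] = (r[..., c] - r_mu) / r_sd * s_sd + s_mu
    return np.clip(r, 0, 255).astype(np.uint8)

# A uniformly bright rendering pulled toward a darker scene:
rendered = np.full((4, 4, 3), 200, dtype=np.uint8)
scene = np.full((4, 4, 3), 100, dtype=np.uint8)
assert match_lighting(rendered, scene).mean() == 100.0
```

A production system would more likely relight the 3D model directly from an estimated scene illumination rather than post-correct pixel statistics.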
3. The method of claim 1, wherein generating the image of the second vehicle further comprises: generating an image of the second vehicle in which a movable component of the second vehicle has a state corresponding to the state of the corresponding movable component of the first vehicle.
4. The method of claim 1, wherein generating the image of the second vehicle further comprises: identifying the color and/or texture of the vehicle paint of the first vehicle in the received photograph or video, and making the color and/or texture of the vehicle paint of the second vehicle in the generated image of the second vehicle approximate the color and/or texture of the vehicle paint of the first vehicle.
5. The method of claim 1, wherein replacing the image of the first vehicle with the generated image of the second vehicle comprises: superimposing the image of the second vehicle on the image of the first vehicle so that the image of the first vehicle and the image of the second vehicle are displayed in a translucent manner.
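The translucent superimposition of claim 5 amounts to alpha blending. The sketch below assumes the two vehicle images are already aligned and of equal size; the function name is illustrative.

```python
import numpy as np

def superimpose_translucent(first_img, second_img, alpha=0.5):
    # Blend the second vehicle over the first at opacity `alpha`,
    # so that both images remain partially visible.
    blended = ((1.0 - alpha) * first_img.astype(np.float64)
               + alpha * second_img.astype(np.float64))
    return blended.round().astype(np.uint8)

first = np.zeros((2, 2, 3), dtype=np.uint8)
second = np.full((2, 2, 3), 200, dtype=np.uint8)
assert (superimpose_translucent(first, second) == 100).all()
```

With `alpha` near 0 the first vehicle dominates; near 1, the second; intermediate values give the see-through comparison the claim describes.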
6. The method of claim 5, further comprising: highlighting differences in shape and/or size between corresponding components of the first vehicle and the second vehicle.
7. The method of claim 1, wherein generating the image of the second vehicle comprises:
generating the image of the second vehicle having the corresponding attitude angle based on a degree of matching between features of key components of the first vehicle and features of corresponding key components of the 3D model of the second vehicle at a plurality of attitude angles.
8. The method of claim 1, wherein generating the image of the second vehicle comprises:
generating the image of the second vehicle having the corresponding attitude angle based on similarities between the image of the first vehicle and a plurality of images, photographed from a plurality of angles, in the panoramic image of the appearance of the second vehicle.
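The attitude-angle selection of claim 8 can be sketched as choosing, among views of the panoramic appearance image, the one most similar to the photographed first vehicle. Mean squared error stands in here for whatever similarity measure the implementation actually uses, and the names are assumptions.

```python
import numpy as np

def best_matching_view(first_vehicle_img, panoramic_views):
    # Score each candidate view against the photographed vehicle with
    # mean squared error and return the index of the closest one.
    target = first_vehicle_img.astype(np.float64)
    scores = [((target - v.astype(np.float64)) ** 2).mean()
              for v in panoramic_views]
    return int(np.argmin(scores))

target = np.full((4, 4, 3), 100, dtype=np.uint8)
views = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (0, 90, 255)]
assert best_matching_view(target, views) == 1
```

Raw pixel MSE is pose- and lighting-sensitive; a real system would likely compare extracted features instead, but the selection logic is the same.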
9. The method of claim 1, further comprising:
altering the appearance configuration of the second vehicle in the updated photograph or video;
switching an on/off state of a lamp of the second vehicle in the updated photograph or video;
changing the scene image in the updated photograph or video; and/or
changing the model of the second vehicle in the updated photograph or video.
10. An apparatus for processing a photograph or video containing a vehicle, comprising:
at least one processor; and
at least one storage device storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the method of any one of claims 1-9.
11. A computer program product comprising instructions which, when executed by a processor, cause the method according to any of claims 1-9 to be performed.
12. A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by a processor, cause performance of the method recited in any one of claims 1-9.
CN202110086883.2A 2021-01-22 2021-01-22 Method, device and storage medium for processing photos or videos containing vehicles Pending CN112929581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110086883.2A CN112929581A (en) 2021-01-22 2021-01-22 Method, device and storage medium for processing photos or videos containing vehicles


Publications (1)

Publication Number Publication Date
CN112929581A true CN112929581A (en) 2021-06-08

Family

ID=76164609



Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150106133A1 * 2013-10-15 2015-04-16 Audatex North America, Inc. Mobile system for generating a damaged vehicle insurance estimate
CN108292444A * 2016-01-11 2018-07-17 Microsoft Technology Licensing, LLC Updating mixed reality thumbnails
CN108446310A * 2018-02-05 2018-08-24 优视科技有限公司 Virtual street view map generation method, apparatus, and client device
CN108701352A * 2016-03-23 2018-10-23 Intel Corporation Image modification and enhancement using recognition based on a three-dimensional object model
CN110213521A * 2019-05-22 2019-09-06 创易汇(北京)科技有限公司 A virtual instant messaging method
CN110383341A * 2017-02-27 2019-10-25 Thomson Licensing Methods, systems and devices for visual effects
CN212115515U * 2020-04-12 2020-12-08 广州通达汽车电气股份有限公司 Panoramic all-round display system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination