CN117745525A - Image processing method, device and equipment for vehicle and readable storage medium

Info

Publication number
CN117745525A
CN117745525A (application CN202311788782.5A)
Authority
CN
China
Prior art keywords
image
vehicle
vehicle configuration
pixel
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311788782.5A
Other languages
Chinese (zh)
Inventor
Zhao Qiang (赵强)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avatr Technology Chongqing Co Ltd
Original Assignee
Avatr Technology Chongqing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Avatr Technology Chongqing Co Ltd filed Critical Avatr Technology Chongqing Co Ltd
Priority to CN202311788782.5A
Publication of CN117745525A

Classifications

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses an image processing method, device and equipment for a vehicle and a readable storage medium, wherein the method comprises the following steps: acquiring a plurality of vehicle configuration images, each vehicle configuration image including appearance information of one accessory of the vehicle; creating an image container based on pixel information of an image of a preset usage scene; sequentially adding a plurality of vehicle configuration images into an image container based on a preset image layer stacking sequence to perform fusion processing to obtain a target vehicle image; and outputting the target vehicle image to be displayed in a preset use scene. In this way, according to the preset image layer stacking sequence, each vehicle configuration image is sequentially added into the image container created based on the pixel information of the image of the preset use scene for fusion processing, so that multiple target vehicle images of different vehicle configuration image combinations can be automatically generated, and the output efficiency of the target vehicle images is improved.

Description

Image processing method, device and equipment for vehicle and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, device, and computer readable storage medium for a vehicle.
Background
In an e-commerce vehicle sales system, a buyer may select vehicle configurations freely, and every type of vehicle configuration may potentially be chosen, so the system needs to present a corresponding group diagram, based on the configurations the buyer has selected, for the buyer to make a decision.
To cope with such free selection, in the related art a picture designer manually designs the various group diagrams in advance and enters them into the system through multiple manual operations. The change in one group diagram relative to another may affect only a small area, yet the designer must redesign the entire picture; the overall process is very labor intensive, involves excessive human participation, and is error-prone.
Disclosure of Invention
In view of this, the embodiment of the application provides an image processing method for a vehicle, which avoids manual participation in the process of composing multiple vehicle configuration images and improves the output efficiency of target vehicle images.
The technical scheme is realized as follows:
the embodiment of the application provides an image processing method of a vehicle, which comprises the following steps:
acquiring a plurality of vehicle configuration images, each vehicle configuration image including appearance information of one accessory of the vehicle;
Creating an image container based on pixel information of an image of a preset usage scene; the image container is used for bearing the plurality of vehicle configuration images;
sequentially adding the plurality of vehicle configuration images into the image container based on a preset image layer stacking sequence to perform fusion processing to obtain a target vehicle image;
and outputting the target vehicle image to be displayed in the preset use scene.
An embodiment of the present application provides an image processing apparatus for a vehicle, including:
a first acquisition module for acquiring a plurality of vehicle configuration images, each including appearance information of one accessory of the vehicle;
a first creating module for creating an image container based on pixel information of an image of a preset usage scene; the image container is used for bearing the plurality of vehicle configuration images;
the first fusion processing module is used for sequentially adding the plurality of vehicle configuration images into the image container based on a preset image layer stacking sequence to perform fusion processing to obtain a target vehicle image;
the first output module is used for outputting the target vehicle image so as to display the target vehicle image in the preset use scene.
An embodiment of the present application provides an image processing apparatus of a vehicle, including: a memory and a processor;
the memory stores a computer program capable of running on the processor, and when the processor executes the computer program, the image processing method of the vehicle provided by the embodiment of the application is realized.
The embodiment of the application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method of the vehicle provided by the embodiment of the application.
The embodiment of the application provides a vehicle image processing method, device, equipment and computer readable storage medium, and by adopting the technical scheme, a plurality of vehicle configuration images are acquired, and each vehicle configuration image comprises appearance information of a fitting of a vehicle; creating an image container based on pixel information of an image of a preset usage scene; sequentially adding a plurality of vehicle configuration images into an image container based on a preset image layer stacking sequence to perform fusion processing to obtain a target vehicle image; and outputting the target vehicle image to be displayed in a preset use scene. In this way, according to the preset image layer stacking sequence, each vehicle configuration image is sequentially added into the image container created based on the pixel information of the image of the preset use scene for fusion processing, so that multiple target vehicle images of different vehicle configuration image combinations can be automatically generated, and the output efficiency of the target vehicle images is improved.
Drawings
Fig. 1 is a flow chart of an image processing method of a vehicle according to an embodiment of the present application;
fig. 2 is a flowchart of a method for determining pixel position information of a vehicle configuration image according to an embodiment of the present application;
fig. 3 is a flow chart of a method for generating and designing a high-efficiency vehicle group diagram according to an embodiment of the present application;
fig. 4 is a schematic diagram of a vehicle-mounted model according to an embodiment of the present application;
FIG. 5 is a schematic illustration of another vehicle-to-vehicle model provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a plurality of vehicle-to-vehicle pattern diagrams according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of the composition structure of an image processing apparatus for a vehicle according to an embodiment of the present application;
fig. 8 is a schematic diagram of the composition structure of an image processing apparatus for a vehicle according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments/other embodiments", which describe a subset of all possible embodiments; it is to be understood that "some embodiments/other embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with each other without conflict.
In the following description, the terms "first", "second", and the like are merely used to distinguish between similar objects and do not represent a particular ordering of the objects, it being understood that the "first", "second", or the like may be interchanged with a particular order or precedence, as permitted, to enable embodiments of the present application described herein to be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
In an e-commerce vehicle sales system, a vehicle group diagram is a vehicle appearance picture presented to a buyer through a mobile phone App, a browser, or an applet, and it shows the real appearance of the vehicle at a given angle in detail. Through the group diagram, a buyer can clearly see exterior parts such as the body color, body lines, hubs, calipers, front face, and tail, and interior parts such as the central control layout, seat colors, and seat layout; the vehicle group diagram thus provides an accurate and detailed reference for the user when ordering a vehicle.
In general, a vehicle may have A body colors, B body exterior trims, C hub patterns, D caliper patterns, E front face patterns, F tail patterns, G seat colors, H seat layouts, and I light patterns; these are called vehicle configurations, and any one combination of the vehicle configurations in A, B, C, D, E, F, G, H, I corresponds to one vehicle group diagram, so there will be A×B×C×D×E×F×G×H×I = N group-diagram combinations, where N can be extremely large. If a picture designer manually designs the N vehicle group diagrams, the workload is very great, which brings a great challenge to the designer.
Based on the problems existing in the related art, the embodiment of the application provides a vehicle image processing method, which can automatically generate various target vehicle images combined by different vehicle configuration images and improve the output efficiency of the target vehicle images. As shown in fig. 1, a flow chart of a vehicle image processing method according to an embodiment of the present application is provided, and the method includes the following steps:
s101, acquiring a plurality of vehicle configuration images.
It should be noted that, each vehicle configuration image includes appearance information of one accessory of the vehicle, that is, one vehicle configuration image includes appearance information of only one accessory, and the accessory of the vehicle may be a component of the vehicle, for example, a vehicle frame, a hub, a vehicle body, a seat, and the like, and the appearance information may include a shape, a style, a color, and the like of the accessory.
Illustratively, the appearance information of the vehicle frame includes the model, shape, and the like of the vehicle frame; the appearance information of the hub includes the shape, color, and the like of the hub; the appearance information of the vehicle body includes the color of the vehicle body, etc.; the appearance information of the seat includes the style, color, etc. of the seat. The accessories of the vehicle and their appearance information are only exemplary, and the present application is not limited thereto.
In some embodiments, the vehicle configuration images may be split or extracted from an existing vehicle image, which may be stored in a database, designed in real time by a picture designer, published on the internet, or the like. An existing vehicle image may include appearance information of multiple vehicle accessories, and by splitting or extracting it according to the accessories' appearance information, a plurality of vehicle configuration images each containing the appearance information of a single accessory can be obtained.
S102, creating an image container based on pixel information of an image of a preset usage scene.
In some embodiments, the preset usage scenario may be a poster, a user manual, internet marketing to users, or the like, in which a vehicle image is displayed for vehicle promotion, usage instruction, sales, and so on. The pixel information of the image required in each preset usage scenario differs and may include the number of pixels, the pixel density, the resolution, etc.; for example, the resolution of an image presented on a poster is larger than that of an image presented in a user manual.
In some embodiments, an existing image (which may not be a vehicle image) of the preset usage scene may be acquired, pixel information in the existing image is determined, and an image container identical to the pixel information is created, the image container being used to carry a plurality of vehicle configuration images. The image container may be a text box, canvas, etc. represented by pixels, the resolution of the image container is the same as the resolution of the image in the preset usage scene, and the image container may be rectangular.
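The container creation described above can be sketched in code. This is a minimal illustration, not the patent's implementation: the container is modelled as a nested-list grid of RGBA tuples whose resolution matches an image of the preset usage scene, and all names are assumptions; a real system would use an imaging library rather than nested lists.

```python
def create_image_container(scene_width, scene_height):
    """Return a scene_height x scene_width grid of fully transparent
    RGBA pixels, matching the preset scene image's resolution."""
    return [[(0, 0, 0, 0) for _ in range(scene_width)]
            for _ in range(scene_height)]

# e.g. a tiny 4x3 stand-in for a poster-resolution scene image
container = create_image_container(4, 3)
```

The transparent initial fill means any pixel not covered by a vehicle configuration image remains empty until fusion.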
S103, based on a preset graph layer stacking sequence, sequentially adding a plurality of vehicle configuration images into an image container for fusion processing to obtain a target vehicle image.
In some embodiments, the preset map layer stacking order may be a pre-established order in which the respective accessory images are added to the image container. How difficult each accessory of the vehicle is to see when a user views the vehicle at a preset angle can be determined from the exterior structure of the vehicle, and the preset map layer stacking order is then determined from this visual-contact difficulty. The preset angle can be any angle from which a user views the vehicle; the corresponding vehicle configuration images can be added to the image container in order from the hardest-to-see accessory to the easiest-to-see one, that is, the configuration images of accessories that are hard to see are added to the bottom layers of the image container, and the configuration images of accessories that are easy to see are added to the upper layers.
In some embodiments, a plurality of vehicle configuration images are sequentially added to an image container for fusion processing, and fusion may be performed based on pixel position information of each vehicle configuration image added to the image container, thereby obtaining a target vehicle image. The pixel location information may include pixel point coordinates, total number of pixels, etc. of the vehicle configuration image relative to the image container. Fusion may include combining, compressing, stitching, etc. processing means.
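The layer-stacking fusion of S103 can be sketched as follows. This is a hedged illustration under assumed representations (RGBA tuples in nested lists, all names invented): layers are pasted bottom-up in the preset stacking order, and transparent pixels let lower layers show through.

```python
def create_image_container(w, h):
    """A blank container: h rows of w fully transparent RGBA pixels."""
    return [[(0, 0, 0, 0) for _ in range(w)] for _ in range(h)]

def add_layer(container, layer, x0, y0):
    """Paste the non-transparent pixels of `layer` at offset (x0, y0);
    alpha-0 pixels keep whatever the lower layers already placed."""
    for dy, row in enumerate(layer):
        for dx, px in enumerate(row):
            if px[3] != 0:
                container[y0 + dy][x0 + dx] = px

container = create_image_container(3, 2)
body = [[(10, 10, 10, 255)] * 3] * 2           # hard-to-see part: bottom layer
hub = [[(200, 200, 200, 255), (0, 0, 0, 0)]]   # easy-to-see part: upper layer
for layer, (x, y) in [(body, (0, 0)), (hub, (1, 1))]:
    add_layer(container, layer, x, y)
```

After the loop, the hub pixel overwrites the body pixel where the two overlap, while transparent hub pixels leave the body visible, mirroring the bottom-to-top stacking order.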
S104, outputting a target vehicle image to be displayed in a preset use scene.
In some embodiments, outputting the target vehicle image may be printing it out, or publishing it to the internet, or sending it to other devices for display, for use in poster presentation, user-manual illustration, internet marketing to users, and the like.
In an embodiment of the present application, a plurality of vehicle configuration images are acquired, each vehicle configuration image including appearance information of one accessory of a vehicle; creating an image container based on pixel information of an image of a preset usage scene; sequentially adding a plurality of vehicle configuration images into an image container based on a preset image layer stacking sequence to perform fusion processing to obtain a target vehicle image; and outputting the target vehicle image to be displayed in a preset use scene. In this way, according to the preset image layer stacking sequence, each vehicle configuration image is sequentially added into the image container created based on the pixel information of the image of the preset use scene for fusion processing, so that multiple target vehicle images of different vehicle configuration image combinations can be automatically generated, and the output efficiency of the target vehicle images is improved.
In some embodiments of the present application, as shown in fig. 2, before the step S103, a plurality of vehicle configuration images are sequentially added to the image container based on the preset map layer stacking sequence to perform the fusion processing, so as to obtain the target vehicle image, the following steps S201 to S203 may be further performed, and each step is described below.
S201, acquiring first pixel information of a plurality of vehicle configuration images.
In some embodiments, the first pixel information may include the total number of pixels, the resolution, etc. of the respective vehicle configuration images. The first pixel information of different vehicle configuration images may be the same or different. The first pixel information of a vehicle configuration image can be obtained at the same time as the vehicle configuration image itself, or obtained by analyzing the pixels of the vehicle configuration image.
S202, creating second pixel information of each accessory image of the vehicle relative to the image container based on the pixel information of the image container.
In some embodiments, the pixel information of the image container is the same as the pixel information of the image of the preset usage scene. The pixel information of the image container includes the resolution of the image container, the total number of pixels, the number of pixels in the horizontal direction, the number of pixels in the vertical direction, and the like, and the second pixel information includes the coordinates of the center pixel point of each accessory image of the vehicle, the number of pixels occupied by the accessory image, and the like. An accessory image here refers generally to an image of an accessory of the vehicle; it may be a virtual placeholder rather than an actually acquired vehicle configuration image.
In some embodiments, when an image container is created, a center pixel point coordinate of each accessory image of the vehicle, a number of pixels occupied by each accessory image, and the like may be determined in the image container, so as to obtain second pixel information of each accessory image relative to the image container, the second pixel information being different from the first pixel information. The second pixel information of the different accessory images may be the same or different.
S203, determining pixel position information of the corresponding vehicle configuration image relative to the image container according to the first pixel information and the second pixel information.
In some embodiments, the pixel location information may include a center pixel coordinate of the vehicle configuration image, an occupied pixel coordinate, the number of pixels in the horizontal direction, the number of pixels in the vertical direction, and the like. The pixel position information of the vehicle configuration image with respect to the image container may be determined from the first pixel information of the vehicle configuration image and the second pixel information of the accessory image of the vehicle determined at the time of creating the image container.
In some embodiments of the present application, the first pixel information includes a first total number of pixels of the plurality of vehicle configuration images, and the second pixel information includes a second total number of pixels of each accessory image and the center pixel point coordinates. Based on this, the pixel position information of the corresponding vehicle configuration image with respect to the image container is determined from the first pixel information and the second pixel information, that is, the above-described step S203 may be implemented by the following steps S2031 to S2034, which are respectively described below.
S2031, acquiring the second total number of pixels and the center pixel point coordinates of the first accessory image, which corresponds to the same accessory as the i-th vehicle configuration image.
Here i is greater than 0 and less than or equal to N, where N is the total number of vehicle configuration images. The first accessory image is the accessory image for the same accessory as in the i-th vehicle configuration image, and the i-th vehicle configuration image may be any one of the plurality of vehicle configuration images. The second total number of pixels is the total number of pixels occupied by the first accessory image.
In some embodiments, configuration parameters of each accessory of the vehicle may be obtained in advance, and based on the obtained configuration parameters of each accessory in advance, an association relationship between at least one configuration parameter of the same accessory and a corresponding vehicle configuration image is created. The configuration parameters are used for describing appearance information of various accessories, and one accessory comprises one or more configuration parameters, such as length, width and height ratio parameters of a vehicle, tail wing parameters, four seats, five seats or seven seats parameters, interior color parameters, hub size and style parameters, caliper color parameters, skylight modeling parameters, lamplight modeling parameters and the like.
In some embodiments, after the association relationship between the configuration parameters of the accessory and the corresponding vehicle configuration image is established, the accessory contained in the vehicle configuration image may be determined according to the configuration parameters of the vehicle configuration image, so as to obtain the second pixel total number and the center pixel point coordinate of the first accessory image which are the same as those of the accessory in the vehicle configuration image.
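The association between configuration parameters and container-side accessory data can be sketched as a simple lookup table. Every key, file name, coordinate, and pixel count below is invented for illustration; the patent only requires that such a binding exist, not this shape.

```python
# Hypothetical association table: configuration parameters of an accessory
# mapped to its first-accessory-image data inside the container.
ACCESSORY_INDEX = {
    ("hub", "22-inch", "five-spoke"): {
        "image": "hub_22_five_spoke.jpg",   # hypothetical file name
        "center": (640, 820),               # center pixel point coordinates
        "pixel_total": 90_000,              # second total number of pixels
    },
}

def lookup_accessory(kind, *params):
    """Return the container-side data bound to these configuration
    parameters, or None if no such binding exists."""
    return ACCESSORY_INDEX.get((kind, *params))

hub = lookup_accessory("hub", "22-inch", "five-spoke")
```

A tuple key keeps the one-to-one correspondence between a full set of configuration parameters and a single accessory image.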
S2032, if the total number of the first pixels of the ith vehicle configuration image and the total number of the second pixels of the first accessory image are different, adjusting the pixel density of the ith vehicle configuration image so that the total number of the pixels of the ith vehicle configuration image after adjustment is the same as the total number of the second pixels of the first accessory image.
In some embodiments, the first total number of pixels is the number of pixels occupied by the ith vehicle configuration image, and if the first total number of pixels of the ith vehicle configuration image is greater than the second total number of pixels of the first accessory image, the pixel density of the ith vehicle configuration image may be reduced by a downsampling process, for example, from 100 pixels/inch to 80 pixels/inch, such that the adjusted total number of pixels of the ith vehicle configuration image is the same as the second total number of pixels of the first accessory image.
In other embodiments, if the first pixel count of the ith vehicle configuration image is less than the second pixel count of the first accessory image, the pixel density of the ith vehicle configuration image may be increased by an upsampling process, for example, from 50 pixels per inch to 100 pixels per inch, such that the adjusted pixel count of the ith vehicle configuration image is the same as the second pixel count of the first accessory image.
S2033, acquiring pixel density and size information of the adjusted i-th vehicle configuration image.
In some embodiments, the size information of the adjusted i-th vehicle configuration image may include its width (horizontal direction) and height (vertical direction). The width and height of the adjusted i-th vehicle configuration image may each be larger than before adjustment, or may each be smaller than before adjustment.
S2034, determining pixel position information of the ith vehicle configuration image with respect to the image container based on the pixel density and size information of the ith vehicle configuration image after adjustment and the center pixel point coordinates of the first accessory image.
In some embodiments, the number of pixels the adjusted i-th vehicle configuration image occupies in the horizontal direction of the image container may be determined from its pixel density and width, and the number of pixels it occupies in the vertical direction may be determined from its pixel density and height. Combined with the center pixel point coordinates of the first accessory image, this yields the pixel position information of the i-th vehicle configuration image relative to the image container.
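The placement computed in S2034 can be sketched as simple integer arithmetic. The use of integer division and the returned field names are assumptions for illustration only: given the first accessory image's center pixel coordinates and the adjusted configuration image's width and height in pixels, derive the pixel range the image spans inside the container.

```python
def placement(center_x, center_y, width, height):
    """Pixel range spanned by an image whose centre must land on
    (center_x, center_y) in the container, using integer coordinates."""
    left = center_x - width // 2
    top = center_y - height // 2
    return {"left": left, "top": top,
            "right": left + width - 1, "bottom": top + height - 1}

pos = placement(640, 820, 300, 200)   # illustrative numbers
```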
It can be understood that, when the first pixel information of a vehicle configuration image differs from the second pixel information of the corresponding first accessory image, the pixel density of the vehicle configuration image is adjusted so that its adjusted pixel information matches the second pixel information of the first accessory image. This ensures that the finally fused target vehicle image conforms to the preset usage scene, realizes automatic adjustment of the pixels and size of the vehicle configuration image, controls the error at the pixel level, and reduces the error of image fusion.
In other embodiments, if the total number of the first pixels of the ith vehicle configuration image is the same as the total number of the second pixels of the first accessory image, the pixel position information of the ith vehicle configuration image may be determined directly according to the first pixel information and the size information of the ith vehicle configuration image and the center point pixel coordinates of the first accessory image.
In some embodiments of the present application, after obtaining the pixel position information of each vehicle configuration image, the first vehicle configuration image and the second vehicle configuration image may be fused based on the pixel position information of the first vehicle configuration image and the pixel position information of the second vehicle configuration image, to obtain a first vehicle fusion image.
In some embodiments, the center positions of the first vehicle configuration image and the second vehicle configuration image may be determined based on the center pixel point coordinates of the first vehicle configuration image and the center pixel point coordinates of the second vehicle configuration image; and sequentially filling pixels according to the pixel point coordinates occupied by the first vehicle configuration image and the second vehicle configuration image, so as to obtain a first vehicle fusion image. The first vehicle fusion image contains information of two vehicle configuration images.
In some embodiments, the first vehicle configuration image is a first vehicle configuration image added to the image container determined according to a preset map layer stacking order, and the second vehicle configuration image is a second vehicle configuration image added to the image container determined according to a preset map layer stacking order.
In one possible implementation, the first vehicle configuration image may be a chassis style map of the vehicle and the second vehicle configuration image may be a seat layout map of the vehicle; in another possible implementation, the first vehicle configuration image may be a vehicle frame pattern map and the second vehicle configuration image may be a vehicle body color map. The first and second vehicle configuration images here are merely exemplary illustrations, and the present application is not limited thereto.
In some embodiments, the jth vehicle fusion image and the jth+2 vehicle configuration image may be fused based on the pixel position information of the jth vehicle fusion image and the pixel position information of the jth+2 vehicle configuration image, to obtain the jth+1 vehicle fusion image. j is greater than 0 and less than or equal to N-2, N is the total number of vehicle configuration images, and the j+2th vehicle configuration image is the j+2th vehicle configuration image added to the image container according to the preset map layer stacking sequence. The j+1th vehicle fusion image includes all the information of the first to j+1th vehicle configuration images.
For example, suppose there are three vehicle configuration images whose preset map layer stacking order is P1, P2, P3. First, P1 is added to the image container, and then P2 is placed into the image container; in the image container the P2 layer lies above the P1 layer, and P1 and P2 are compressed and synthesized to obtain a first vehicle fusion image Q1, which contains the information of P1 and P2. Then P3 is placed into the image container; at this time Q1 and P3 exist in the image container, and compression is continued on Q1 and P3 to obtain a second vehicle fusion image Q2, which contains all the information in P1, P2, and P3.
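The pairwise P1/P2/P3 fusion above can be sketched as a fold over the layer sequence. This is an assumed illustration, not the patent's mechanism: string values stand in for pixels and only mark which layer each value came from, while None plays the role of a transparent pixel.

```python
from functools import reduce

def fuse(base, layer):
    """Overlay `layer` on `base`: non-None layer entries win,
    None entries let the base show through."""
    return [[l if l is not None else b for b, l in zip(brow, lrow)]
            for brow, lrow in zip(base, layer)]

P1 = [["body", "body"], ["body", "body"]]   # bottom layer
P2 = [[None, "hub"], [None, None]]
P3 = [["lamp", None], [None, None]]

Q1 = fuse(P1, P2)                 # contains the information of P1 and P2
Q2 = reduce(fuse, [P2, P3], P1)   # Q2 = fuse(fuse(P1, P2), P3)
```

Expressing the fusion as a fold makes it clear that each intermediate vehicle fusion image accumulates all the layers added so far.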
In an embodiment of the present application, a plurality of vehicle configuration images are acquired, each vehicle configuration image including appearance information of one accessory of a vehicle; creating an image container based on pixel information of an image of a preset usage scene; sequentially adding a plurality of vehicle configuration images into an image container based on a preset image layer stacking sequence to perform fusion processing to obtain a target vehicle image; and outputting the target vehicle image to be displayed in a preset use scene. In this way, according to the preset image layer stacking sequence, each vehicle configuration image is sequentially added into the image container created based on the pixel information of the image of the preset use scene for fusion processing, so that multiple target vehicle images of different vehicle configuration image combinations can be automatically generated, and the output efficiency of the target vehicle images is improved.
Next, the implementation of an embodiment of the present application in an actual application scenario is described.
In some embodiments, fig. 3 shows a flow chart of an efficient vehicle group-map generation and design method according to an embodiment of the present application. The method may be implemented through the following steps S301 to S306, described below with reference to fig. 3.
S301, design single vehicle style sheets (corresponding to the "vehicle configuration images" in the other embodiments).
The designer splits the whole vehicle group map during design and produces single-style JPEG pictures, including a body-color style sheet, a hub style sheet, a caliper style sheet, front and rear light style sheets, a front-face style sheet, a rear-tail style sheet, a seat-layout style sheet, an interior-trim style sheet, a cockpit central-control style sheet, a tail-wing style sheet, and a sunroof style sheet.
S302, establish vehicle configuration rule data (corresponding to the "configuration parameters of respective accessories of the vehicle" in other embodiments).
The vehicle configuration rule data may include vehicle length/width/height ratio parameters, whether a tail wing is present, four-seat/five-seat/seven-seat parameters, interior-trim color parameters, hub size and style parameters, caliper color parameters, sunroof styling parameters, and light styling parameters. It should be noted here that the basic data entered also differs between models; for example, not every vehicle has a tail wing, the four-seat model has no tail wing, and the seven-seat model must use 22-inch hubs.
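The rule data and its model-dependent constraints can be pictured as a small lookup plus a validity check. Everything below — the field names and the two encoded rules — is an illustrative assumption drawn from the examples in the text (no tail wing on four-seat models, 22-inch hubs on seven-seat models), not an actual schema from the patent.

```python
# Hypothetical sketch of vehicle configuration rule data (S302) plus a
# consistency check. Field names and rules are illustrative assumptions.

RULES = {
    "seven_seat_hub_inches": 22,   # seven-seat model must use 22-inch hubs
    "four_seat_tail_wing": False,  # four-seat model has no tail wing
}

def validate(config):
    """Return True only if the configuration respects the rule data."""
    if config.get("seats") == 7 and config.get("hub_inches") != RULES["seven_seat_hub_inches"]:
        return False
    if config.get("seats") == 4 and config.get("tail_wing", False):
        return False
    return True

ok = validate({"seats": 7, "hub_inches": 22})
bad = validate({"seats": 4, "tail_wing": True})
```

A check of this kind would run before compositing, so that only valid accessory combinations ever reach the image container.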
S303, bind each vehicle configuration style sheet to its corresponding vehicle configuration parameter to form a one-to-one correspondence (corresponding to "creating an association relationship between at least one configuration parameter of the same accessory and the corresponding vehicle configuration image" in other embodiments).
The designer binds each vehicle configuration style sheet produced in S301 to the vehicle configuration parameters in S302 to form a one-to-one correspondence. For example, the vehicle appearance parameters are bound to the appearance style sheet, the hub parameters to the hub style sheet, the caliper parameters to the caliper style sheet, and so on.
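The one-to-one binding of S303 amounts to a simple mapping from (accessory, parameter) pairs to style-sheet files. The keys and file paths below are hypothetical placeholders, not assets from the patent.

```python
# Sketch of the one-to-one binding in S303: each (accessory, parameter)
# pair maps to exactly one style-sheet image. Keys and paths are
# hypothetical placeholders.

bindings = {
    ("appearance", "white"): "sheets/body_white.jpg",
    ("hub", "style_a"): "sheets/hub_style_a.jpg",
    ("caliper", "red"): "sheets/caliper_red.jpg",
}

def image_for(accessory, parameter):
    """Look up the style sheet bound to a configuration parameter."""
    return bindings[(accessory, parameter)]

path = image_for("hub", "style_a")
```

Because the mapping is one-to-one, the computer program can resolve any entered configuration directly to the set of layers to composite, with no designer involvement after S303.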
S304, configure the layer order in the system (corresponding to the "preset layer stacking order" in other embodiments).
A computer program can overlay layers from bottom to top in the image container, so it is necessary to specify which layers the single vehicle style sheets from S301 occupy. For example, the body-frame style sheet may serve as the bottom layer, with sequence number 1, called P1. The caliper, body-color, and door style sheets may serve as the layer above P1, with sequence number 2, called P2. The hub, tail-wing, and front/rear style sheets may serve as the layer above P2, with sequence number 3, called P3.
S305, composite the vehicle style sheets layer by layer in the image container to obtain a vehicle group map (corresponding to the "target vehicle image" in other embodiments).
Layer compositing can be performed by a computer program. First, an image container O is created, and its pixels and proportions are set according to the image's application scene (e.g., posters, manuals, or Internet marketing to users). According to the layer sequence numbers, the P1 layer is placed in the image container first, then the P2 layer above it; P1 and P2 are compressed and composited to obtain a new layer Q1, which contains both P1 and P2. The P3 layer is then placed in the container, which now holds Q1 and P3; Q1 and P3 are compressed and composited to obtain Q2, which contains all the vehicle configuration information of P1, P2, and P3 (corresponding to "sequentially adding the plurality of vehicle configuration images into the image container based on the preset layer stacking order for fusion processing to obtain the target vehicle image" in other embodiments).
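The container-creation step can be sketched as follows. The scene names and pixel dimensions are illustrative assumptions — the patent only says that pixels and proportions are set per application scene — and layers appended later sit above earlier ones.

```python
# Sketch of creating image container O with pixels/proportions set per
# application scene (S305). Scene names and dimensions are assumptions.

SCENE_PRESETS = {
    "poster": (3000, 4000),
    "manual": (1200, 1600),
    "internet_marketing": (750, 1000),
}

def create_container(scene):
    """Return an empty container carrying the scene's pixel dimensions."""
    width, height = SCENE_PRESETS[scene]
    return {"width": width, "height": height, "layers": []}

def add_layer(container, layer_name):
    """Layers added later sit above earlier ones (bottom-to-top order)."""
    container["layers"].append(layer_name)

O = create_container("poster")
for name in ("P1", "P2", "P3"):   # the preset stacking order
    add_layer(O, name)
```

In a real pipeline the container would also pin each layer's pixel data and position; here it only records the scene-derived dimensions and the stacking order.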
The vehicle group map is essentially a 2D JPEG picture that clearly shows vehicle configuration styles such as body color, hub style, caliper style, front-face style, and rear-tail style.
The group map should not be understood as a single flat 2D drawing, but as a stack of multiple single vehicle style sheets superimposed on one another. Illustratively, as shown in fig. 4, suppose the designer provides a vehicle frame diagram 401 showing only the A/B/C-pillar styles and the front/rear impact-beam styles, with no body-color style, hub style, door style, or front/rear light style; this is called sheet P1. Next, the designer provides a hub style sheet 501, as shown in fig. 5, with no body color, no front/rear lights, and no calipers, only the four hub styles; this is called sheet P2. The designer then provides a white body-color style sheet with no body-frame, hub, or front/rear light styles, called sheet P3, and finally a door style sheet with the four left and right doors and no body-color, hub, or front/rear light styles, called sheet P4.
Illustratively, as shown in fig. 6, an image container 601 is created in computer memory. First, image P1 602 is placed in the container, which then holds only the P1 vehicle frame. P2 603 is then placed on top of P1 602, so the container holds the image content of P1 602 and P2 603. P3 604 is placed on top of P2 603, and P4 605 on top of P3 604, so the container holds the content of P1 602, P2 603, P3 604, and P4 605. Finally, P1 602, P2 603, P3 604, and P4 605 are compressed in the image container into the final group map P, which presents a group map with a white vehicle body (including the doors, engine cover, and trunk), white doors, and four wheels.
S306, publish the vehicle group map to the Internet (corresponding to "outputting the target vehicle image" in other embodiments).
A computer program extracts the final vehicle group map from the image container in S305 and publishes it to the Internet.
It can be understood that, compared with a conventional designer workflow, the efficient vehicle group-map generation and design method provided by the embodiments of the present application is more efficient across the whole process of conceiving, producing, adjusting, and outputting the group map. The designer only needs to design the vehicle configuration style sheets in the conception stage; all remaining steps are completed, adjusted, and output automatically by the computer program. Without excessive human involvement, the computer automatically matches layer parameters according to S302 and automatically adjusts pixels and sizes according to S305, keeping the resulting error at the pixel level.
Fig. 7 is a schematic structural diagram of an image processing apparatus for a vehicle according to an embodiment of the present application, and as shown in fig. 7, an image processing apparatus 700 for a vehicle includes:
a first acquisition module 701, configured to acquire a plurality of vehicle configuration images, each vehicle configuration image including appearance information of one accessory of the vehicle;
A first creating module 702, configured to create an image container based on pixel information of an image of a preset usage scene; the image container is used for bearing the plurality of vehicle configuration images;
a first fusion processing module 703, configured to sequentially add the plurality of vehicle configuration images to the image container based on a preset image layer stacking sequence to perform fusion processing, so as to obtain a target vehicle image;
the first output module 704 is configured to output the target vehicle image for displaying in the preset usage scenario.
In some embodiments, the pixel information of the image container is the same as the pixel information of the image of the preset usage scene; the image processing apparatus 700 of the vehicle further includes:
a second acquisition module configured to acquire first pixel information of the plurality of vehicle configuration images;
a second creation module for creating second pixel information of each accessory image of the vehicle with respect to the image container based on the pixel information of the image container;
and the first determining module is used for determining pixel position information of the corresponding vehicle configuration image relative to the image container according to the first pixel information and the second pixel information.
In some embodiments, the first pixel information includes a first total number of pixels of the plurality of vehicle configuration images, and the second pixel information includes a second total number of pixels of the respective accessory images and a center pixel point coordinate; the first determining module includes:
a first obtaining sub-module, configured to obtain the second pixel total number and the center pixel point coordinates of a first accessory image corresponding to the same accessory as the i-th vehicle configuration image; wherein i is greater than 0 and less than or equal to N, and N is the total number of the vehicle configuration images;
an adjustment sub-module, configured to adjust a pixel density of the ith vehicle configuration image if a first pixel total number of the ith vehicle configuration image and a second pixel total number of the first accessory image are different, so that the adjusted pixel total number of the ith vehicle configuration image is the same as the second pixel total number of the first accessory image;
a second obtaining sub-module, configured to obtain pixel density and size information of the i-th vehicle configuration image after adjustment;
a first determining sub-module, configured to determine pixel position information of the ith vehicle configuration image relative to the image container based on the adjusted pixel density and size information of the ith vehicle configuration image and a center pixel point coordinate of the first accessory image.
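The adjust-then-position logic of these sub-modules can be sketched arithmetically: if the configuration image's pixel total differs from the accessory region's, rescale it (preserving aspect ratio) so the totals match, then derive a top-left position from the accessory's centre-pixel coordinate. The function name and argument layout are assumptions for illustration, not the patent's interface.

```python
# Arithmetic sketch of the pixel-alignment sub-modules: rescale the i-th
# configuration image so its pixel total matches the accessory region's,
# then centre it on the accessory's centre-pixel coordinate.

def align(width, height, target_total, center_x, center_y):
    """Return (new_w, new_h, top_left) with new_w * new_h ~= target_total."""
    scale = (target_total / (width * height)) ** 0.5  # uniform rescale factor
    new_w, new_h = round(width * scale), round(height * scale)
    top_left = (center_x - new_w // 2, center_y - new_h // 2)
    return new_w, new_h, top_left

# Example: a 100x50 configuration image must match a 20000-pixel accessory
# region centred at (400, 300) in the container.
w, h, pos = align(100, 50, 20000, 400, 300)
```

With the example numbers, the image doubles to 200x100 and is pasted at (300, 250) so that its centre lands on the accessory's centre pixel.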
In some embodiments, the first fusion processing module 703 includes:
the first fusion sub-module is used for fusing the first vehicle configuration image and the second vehicle configuration image based on the pixel position information of the first vehicle configuration image and the pixel position information of the second vehicle configuration image to obtain a first vehicle fusion image; wherein,
the first vehicle configuration image is a first vehicle configuration image added to the image container determined according to the preset map layer stacking order, and the second vehicle configuration image is a second vehicle configuration image added to the image container determined according to the preset map layer stacking order.
In some embodiments, the first fusion processing module 703 further comprises:
the second fusion sub-module is used for fusing the jth vehicle fusion image and the jth+2th vehicle configuration image based on the pixel position information of the jth vehicle fusion image and the pixel position information of the jth+2th vehicle configuration image to obtain the jth+1th vehicle fusion image; wherein,
the j+2th vehicle configuration image is a j+2th vehicle configuration image added to the image container according to the preset image layer stacking sequence, j is greater than 0 and less than or equal to N-2, and N is the total number of the vehicle configuration images.
In some embodiments, the image processing apparatus 700 of the vehicle further includes:
the second determining module is used for determining the visual contact difficulty degree of a user for watching accessories of the vehicle under a preset angle according to the appearance structure of the vehicle;
and the third determining module is used for determining the preset chart layer stacking sequence according to the visual contact difficulty level.
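Deriving the stacking order from visual-contact difficulty can be sketched as a sort: accessories that are harder to see go lower in the stack so that easier-to-see parts are composited on top. The numeric scores below are invented for illustration; the patent does not specify values.

```python
# Sketch of ordering layers by visual-contact difficulty (third determining
# module): harder-to-see accessories sit lower in the stack. The scores
# are illustrative assumptions.

difficulty = {
    "body_frame": 3,  # hardest to see directly -> bottom layer
    "caliper": 2,
    "hub": 1,         # easiest to see -> top layer
}

# Most difficult first gives the bottom-to-top stacking order.
stacking_order = sorted(difficulty, key=difficulty.get, reverse=True)
```

This reproduces the ordering used in the S304 example, where the body frame is P1 at the bottom and the hub layer sits near the top.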
In some embodiments, the image processing apparatus 700 of the vehicle further includes:
the third acquisition module is used for acquiring configuration parameters of various accessories of the vehicle; the configuration parameters are used for describing appearance information of the various accessories;
and the third creation module is used for creating the association relation between at least one configuration parameter of the same accessory and the corresponding vehicle configuration image.
It should be noted that the description of the image processing apparatus of the vehicle in the embodiments of the present application is similar to the description of the method embodiments above and has similar beneficial effects, so details are omitted. For technical details not disclosed in the apparatus embodiments, please refer to the description of the method embodiments of the present application.
In the embodiments of the present application, if the image processing method of the vehicle is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, an embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method of the vehicle provided in the above embodiments.
The embodiment of the present application also provides an image processing device of a vehicle. Fig. 8 is a schematic structural diagram of an image processing device of a vehicle according to an embodiment of the present application. As shown in fig. 8, the image processing device 800 of the vehicle includes: a memory 801, a processor 802, a communication interface 803, and a communication bus 804. The memory 801 is configured to store executable vehicle image processing instructions; the processor 802 is configured to execute the executable vehicle image processing instructions stored in the memory, so as to implement the image processing method of the vehicle provided in the above embodiments.
The above description of the image processing device and storage medium embodiments of the vehicle is similar to the description of the method embodiments above and has advantageous effects similar to those of the method embodiments. For technical details not disclosed in the image processing device and storage medium embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising at least one ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation, such as combining multiple units or components, integrating them into another system, or omitting or not performing some features. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by program instructions directing relevant hardware; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
Alternatively, the integrated units described above may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device to perform all or part of the methods described in the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, which should be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of image processing for a vehicle, the method comprising:
acquiring a plurality of vehicle configuration images, each vehicle configuration image including appearance information of one accessory of the vehicle;
creating an image container based on pixel information of an image of a preset usage scene; the image container is used for bearing the plurality of vehicle configuration images;
sequentially adding the plurality of vehicle configuration images into the image container based on a preset image layer stacking sequence to perform fusion processing to obtain a target vehicle image;
and outputting the target vehicle image to be displayed in the preset use scene.
2. The method according to claim 1, wherein the pixel information of the image container is the same as the pixel information of the image of the preset usage scene; the method further comprises:
acquiring first pixel information of the plurality of vehicle configuration images;
creating second pixel information of each accessory image of the vehicle relative to the image container based on the pixel information of the image container;
and determining pixel position information of the corresponding vehicle configuration image relative to the image container according to the first pixel information and the second pixel information.
3. The method of claim 2, wherein the first pixel information comprises a first total number of pixels of the plurality of vehicle configuration images, and the second pixel information comprises a second total number of pixels of the respective accessory images and a center pixel point coordinate;
The determining pixel position information of the corresponding vehicle configuration image relative to the image container according to the first pixel information and the second pixel information comprises:
acquiring the second pixel total number and the center pixel point coordinates of a first accessory image corresponding to the same accessory as the i-th vehicle configuration image; wherein i is greater than 0 and less than or equal to N, N being the total number of the vehicle configuration images;
if the total number of the first pixels of the ith vehicle configuration image and the total number of the second pixels of the first accessory image are different, adjusting the pixel density of the ith vehicle configuration image so that the total number of the pixels of the ith vehicle configuration image after adjustment is the same as the total number of the second pixels of the first accessory image;
acquiring pixel density and size information of the i-th vehicle configuration image after adjustment;
and determining pixel position information of the ith vehicle configuration image relative to the image container based on the adjusted pixel density and size information of the ith vehicle configuration image and the central pixel point coordinates of the first accessory image.
4. The method of claim 2, wherein sequentially adding the plurality of vehicle configuration images to the image container for fusion processing comprises:
fusing the first vehicle configuration image and the second vehicle configuration image based on pixel position information of the first vehicle configuration image and pixel position information of the second vehicle configuration image to obtain a first vehicle fused image; wherein,
the first vehicle configuration image is a first vehicle configuration image added to the image container determined according to the preset map layer stacking order, and the second vehicle configuration image is a second vehicle configuration image added to the image container determined according to the preset map layer stacking order.
5. The method as recited in claim 4, further comprising:
based on the pixel position information of the jth vehicle fusion image and the pixel position information of the jth+2 vehicle configuration image, fusing the jth vehicle fusion image and the jth+2 vehicle configuration image to obtain the jth+1 vehicle fusion image; wherein,
the j+2th vehicle configuration image is a j+2th vehicle configuration image added to the image container according to the preset image layer stacking sequence, j is greater than 0 and less than or equal to N-2, and N is the total number of the vehicle configuration images.
6. The method according to any one of claims 1 to 5, further comprising:
according to the appearance structure of the vehicle, determining the visual contact difficulty degree of a user for watching accessories of the vehicle under a preset angle;
and determining the stacking sequence of the preset map layers according to the visual contact difficulty level.
7. The method according to any one of claims 1 to 5, further comprising:
acquiring configuration parameters of various accessories of the vehicle; the configuration parameters are used for describing appearance information of the various accessories;
and creating an association relationship between at least one configuration parameter of the same accessory and the corresponding vehicle configuration image.
8. An image processing apparatus of a vehicle, comprising:
a first acquisition module configured to acquire a plurality of vehicle configuration images each including appearance information of one accessory of the vehicle;
a first creating module for creating an image container based on pixel information of an image of a preset usage scene; the image container is used for bearing the plurality of vehicle configuration images;
the first fusion processing module is used for sequentially adding the plurality of vehicle configuration images into the image container based on a preset image layer stacking sequence to perform fusion processing to obtain a target vehicle image;
The first output module is used for outputting the target vehicle image so as to display the target vehicle image in the preset use scene.
9. An image processing apparatus of a vehicle, characterized by comprising:
a memory for storing executable vehicle image processing instructions;
a processor for implementing the method of any one of claims 1 to 7 when executing the executable vehicle image processing instructions stored in the memory.
10. A computer-readable storage medium, characterized in that vehicle image processing instructions are stored thereon for causing a processor to execute the method according to any one of claims 1 to 7.
CN202311788782.5A 2023-12-22 2023-12-22 Image processing method, device and equipment for vehicle and readable storage medium Pending CN117745525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311788782.5A CN117745525A (en) 2023-12-22 2023-12-22 Image processing method, device and equipment for vehicle and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311788782.5A CN117745525A (en) 2023-12-22 2023-12-22 Image processing method, device and equipment for vehicle and readable storage medium

Publications (1)

Publication Number Publication Date
CN117745525A true CN117745525A (en) 2024-03-22

Family

ID=90279261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311788782.5A Pending CN117745525A (en) 2023-12-22 2023-12-22 Image processing method, device and equipment for vehicle and readable storage medium

Country Status (1)

Country Link
CN (1) CN117745525A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination