CN115205432B - Simulation method and model for automatic generation of cigarette terminal display sample image - Google Patents

Simulation method and model for automatic generation of cigarette terminal display sample image

Info

Publication number
CN115205432B
CN115205432B (granted publication of application CN202211075129.XA)
Authority
CN
China
Prior art keywords
sample
image
obj
model
counter
Prior art date
Legal status
Active
Application number
CN202211075129.XA
Other languages
Chinese (zh)
Other versions
CN115205432A (en)
Inventor
龙涛
杨恒
李轩
邓靖波
Current Assignee
Shenzhen Aimo Technology Co ltd
Original Assignee
Shenzhen Aimo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Aimo Technology Co ltd filed Critical Shenzhen Aimo Technology Co ltd
Priority to CN202211075129.XA
Publication of CN115205432A
Application granted
Publication of CN115205432B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Abstract

The invention discloses a simulation method and a model for automatically generating cigarette terminal display sample images. It relates to the technical field of deep learning and solves the technical problem that existing technical schemes make it difficult for certain industries to quickly obtain the large number of sample images that deep learning requires. The method comprises: S100: acquiring basic data from a real sample image, establishing a basic model, and presetting the number of sample images to be generated by the basic model, where the basic model comprises a counter model and a sample object model; S200: randomly distributing, through python, a plurality of sample objects generated by the sample object model inside a counter generated by the counter model, to obtain an image to be processed; S300: performing coordinate relative conversion processing on the image to be processed to obtain a sample image with coordinate labels; S400: judging whether the number of sample images has reached the preset value; if not, repeating steps S200 to S400; if so, ending the generation of sample images. The method is used to rapidly acquire the large number of sample images required by deep learning in certain industries.

Description

Simulation method and model for automatic generation of cigarette terminal display sample image
Technical Field
The invention relates to the technical field of deep learning, in particular to a simulation method and a simulation model for automatically generating a cigarette terminal display sample image.
Background
Deep learning learns the internal rules and representation levels of sample data; its ultimate aim is to give machines the same analytical and learning ability as humans, so that they can recognize data such as text, images and sound. Deep learning has produced many achievements in search technology, data mining, machine learning, machine translation, natural language processing, multimedia learning, speech, recommendation and personalization, and related fields. It enables machines to imitate human activities such as seeing, hearing and thinking, solves many complex pattern recognition problems, and has driven great progress in artificial intelligence technology.
In deep learning, training samples are indispensable and of primary importance: their quality indirectly determines the effectiveness of a deep learning model. Because of the particularity of the tobacco industry, collecting samples is a long and difficult process, so a method that can quickly generate sample images of cigarette case displays or placements is needed.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
for some industries, the existing technical scheme is difficult to rapidly acquire a large number of sample images required by deep learning.
Disclosure of Invention
The invention aims to provide a simulation method and a model for automatically generating cigarette terminal display sample images, so as to solve the technical problem that, for certain industries, existing technical schemes make it difficult to quickly obtain the large number of sample images required by deep learning. The technical effects produced by the preferred technical schemes among those provided by the invention are described in detail below.
In order to achieve the purpose, the invention provides the following technical scheme:
the invention provides a simulation method for automatically generating a cigarette terminal display sample image, which comprises the following steps:
s100: acquiring basic data of a real sample image, establishing a basic model, and presetting the number of sample images generated by the basic model; the base model comprises a counter model and a sample object model;
s200: randomly distributing a plurality of sample objects generated by the sample object model in a counter generated by the counter model through python to obtain an image to be processed;
s300: carrying out coordinate relative conversion processing on the image to be processed to obtain the sample image with coordinate labels;
s400: judging whether the number of the sample images reaches a preset value; if not, the steps S200 to S400 are repeated; if so, finishing the generation of the sample image;
the step of S200 the specific process is as follows:
s210: the sample object model and the counter model respectively generate the sample object and the counter;
s220: the sample object and the counter are endowed with volume collision attributes, and gravity information is endowed to the sample object;
s230: setting the length, width and height of the counter and the initial 3D coordinate as (x, y, z), and randomly distributing the sample objects in the counter through python to generate an initial image;
s240: acquiring 3D coordinates of the sample object through a gravity and time sequence algorithm;
s250: and saving the 3D coordinates and the initial image, and outputting the image to be processed.
Preferably, in the step S300, the coordinate relative conversion processing performed on the image to be processed specifically includes:
s310: rendering the image to be processed, and setting the position of a 3D camera and the origin of 3D simulation;
s320: converting the 3D coordinate information of the sample object in the 3D space into 2D coordinate information of a 2D image through a coordinate matrix conversion algorithm; the 2D coordinate information is saved as the coordinate annotation of the sample image.
Preferably, in the step S310, before rendering the image to be processed, random illumination brightness is further added to the 3D space.
Preferably, the 3D coordinates of the sample object are all saved in an (obj_x, obj_y, obj_z) list in python.
Preferably, in the step S320, the coordinate matrix conversion algorithm includes a translation matrix formula and a rotation matrix formula.
Preferably, the formula of the translation matrix is:
move_point = original_point - [obj_x, obj_y, obj_z];
the formula of the rotation matrix is:
r_x = [[1, 0, 0], [0, cos(obj_x), sin(obj_x)], [0, -sin(obj_x), cos(obj_x)]]
r_y = [[cos(obj_y), 0, -sin(obj_y)], [0, 1, 0], [sin(obj_y), 0, cos(obj_y)]]
r_z = [[cos(obj_z), sin(obj_z), 0], [-sin(obj_z), cos(obj_z), 0], [0, 0, 1]];
wherein r_x, r_y and r_z are the relative coordinate transformation vectors of the sample object from the origin in 3D space around the x, y and z axes respectively; move_point is the moving distance of the sample object from the origin; original_point is the origin coordinate; and cos(obj_x), cos(obj_y), cos(obj_z), sin(obj_x), sin(obj_y) and sin(obj_z) are the Euler-angle terms of the rotation matrix.
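The translation and per-axis rotation formulas above translate directly into NumPy. The function names are illustrative; the matrix entries follow the patent's formulas term by term.

```python
import numpy as np

def translation(original_point, obj_xyz):
    # move_point = original_point - [obj_x, obj_y, obj_z]
    return np.asarray(original_point, dtype=float) - np.asarray(obj_xyz, dtype=float)

def rotation_matrices(obj_x, obj_y, obj_z):
    # One rotation matrix per axis, with the sign conventions used in the patent.
    r_x = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(obj_x), np.sin(obj_x)],
                    [0.0, -np.sin(obj_x), np.cos(obj_x)]])
    r_y = np.array([[np.cos(obj_y), 0.0, -np.sin(obj_y)],
                    [0.0, 1.0, 0.0],
                    [np.sin(obj_y), 0.0, np.cos(obj_y)]])
    r_z = np.array([[np.cos(obj_z), np.sin(obj_z), 0.0],
                    [-np.sin(obj_z), np.cos(obj_z), 0.0],
                    [0.0, 0.0, 1.0]])
    return r_x, r_y, r_z
```

Each matrix is orthogonal (its transpose is its inverse), and all three reduce to the identity when the angle is zero, which is a quick way to sanity-check the signs.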
Preferably, the basic data is the shape and size of the sample object and the material and size of the counter; the material comprises glass and wood, and wood grains are distributed on the wood.
Preferably, in the step S100, establishing a basic model specifically includes:
s110: respectively establishing white box models of the sample object and the counter;
s120: generating the sample object model with real texture through 3D simulation according to the basic data and the sample object white box model which are obtained by a real sample picture;
s130: and according to the material and the size of the counter and the white counter box model, carrying out random mapping through 3D simulation to generate the counter model.
In addition, the invention also provides a simulation model for automatically generating cigarette terminal display sample images, wherein the simulation model is used for realizing the above simulation method and comprises the following modules:
the acquisition module acquires a real sample image and extracts basic data of the real sample image;
the sample object generating module is used for carrying out 3D simulation according to the real sample image and basic data to generate the sample object with real texture;
the counter generation module is used for carrying out 3D simulation according to the real sample image and basic data and generating the counter after random mapping;
the 3D space random distribution module is used for randomly distributing the sample objects in the counter to obtain an image to be processed;
the sample generation module is used for acquiring 3D coordinate information of the sample object and carrying out coordinate relative conversion processing on the 3D coordinate information to obtain 2D coordinate information; and the 2D coordinate information is stored as the coordinate label of the sample object, and the coordinate label and the image to be processed are output as the sample image.
One of the technical schemes of the invention has the following advantages or beneficial effects:
according to the method, a basic model is established through basic data obtained by a real sample image, and a plurality of sample objects generated by the basic model are randomly distributed in a counter to generate an initial image. And then carrying out coordinate relative transformation processing on the coordinate value of the sample object in the 3D space through python to finally obtain a sample image with 2D coordinate labels. The whole operation process is convenient and fast, the operation efficiency is improved through automatic operation, the simulation of a real scene is realized, and a large number of sample images required by the cigarettes in the deep learning field can be conveniently and fast obtained.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without inventive efforts, wherein:
FIG. 1 is a flow chart of a method according to a first embodiment of the present invention;
fig. 2 is a detailed flowchart of the step S100 according to the first embodiment of the present invention;
fig. 3 is a detailed flowchart of the step S200 according to the first embodiment of the present invention;
fig. 4 is a detailed flowchart of the step S300 according to the first embodiment of the present invention;
fig. 5 is a schematic structural diagram of a second embodiment of the present invention.
Detailed Description
In order that the objects, aspects and advantages of the present invention will become more apparent, various exemplary embodiments will be described below with reference to the accompanying drawings, which form a part hereof, and in which are shown by way of illustration various exemplary embodiments in which the invention may be practiced. The same numbers in different drawings identify the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. It is to be understood that they are merely examples of processes, methods, apparatus, etc. consistent with certain aspects of the present disclosure as detailed in the appended claims, and that other embodiments may be used or structural and functional modifications may be made to the embodiments set forth herein without departing from the scope and spirit of the present disclosure.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," and the like are used in the orientations and positional relationships illustrated in the accompanying drawings for the purpose of facilitating the description of the present invention and simplifying the description, and do not indicate or imply that the elements so referred to must have a particular orientation, be constructed in a particular orientation, and be operated. The terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. The term "plurality" means two or more. The terms "coupled" and "connected" are to be construed broadly and may include, for example, a fixed connection, a removable connection, a unitary connection, a mechanical connection, an electrical connection, a communicative connection, a direct connection, an indirect connection via intermediate media, and may include, but are not limited to, a connection between two elements or an interactive relationship between two elements. The term "and/or" includes any and all combinations of one or more of the associated listed items. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In order to explain the technical solution of the present invention, the following description is made by way of specific examples, which only show the relevant portions of the embodiments of the present invention.
The first embodiment is as follows:
as shown in fig. 1, the invention provides a simulation method for automatically generating a cigarette terminal display sample image, which comprises the following steps: s100: acquiring basic data of a real sample image, establishing a basic model, and presetting the number of sample images generated by the basic model; the base model comprises a counter model and a sample object model; s200: randomly distributing a plurality of sample objects generated by the sample object model in a counter generated by the counter model through python to obtain an image to be processed; s300: carrying out coordinate relative conversion processing on the image to be processed to obtain a sample image with coordinate labels; s400: judging whether the number of the sample images reaches a preset value or not; if not, the steps S200 to S400 are repeated; if so, the generation of the sample image is ended. Specifically, the method establishes a basic model through basic data acquired by a real sample image, and randomly distributes a plurality of sample objects generated by the basic model in the counter to generate an initial image. And then carrying out coordinate relative transformation processing on the coordinate value of the sample object in the 3D space through python to finally obtain a sample image with 2D coordinate labels. The whole operation process is convenient and fast, the operation efficiency is improved through automatic operation, the simulation of a real scene is realized, and a large number of sample images required by the cigarettes in the deep learning field can be conveniently and fast obtained.
Furthermore, the method first needs to input the real sample image into python and extract the basic data from it. The basic data is the basis of deep learning object modeling, mainly the size and contour parameters of the sample object. It should be noted that, in the real sample image, there is a display counter in addition to the object sample; thus, the basic data includes the shape and size of the sample object and the material and size of the counter. A basic model is established from the extracted basic data: in three-dimensional modeling software, the corresponding basic model is built from the basic data, and the specific modeling software can be chosen as required. After the basic model is established, the number of sample images to be generated is preset, with the preset value set by the user.
The base model includes a counter model and a sample object model, which generate a simulated counter and simulated sample objects respectively. The number and brands of the simulated objects may be one or more, as specified by the user, and there is no limitation on the shooting angle of the simulated counter and simulated sample objects. The simulated sample objects are randomly distributed in the simulated counter through python, and the 3D coordinate information of the simulated sample objects is then obtained through the gravity and time-sequence algorithm, yielding the image to be processed with 3D coordinate information. Coordinate relative conversion processing is performed on the image to be processed, i.e. the coordinate values in 3D space are converted into coordinate values in 2D space to serve as coordinate labels. Finally the sample image with 2D coordinate labels is output.
As shown in fig. 4, as an optional embodiment, in step S300 the coordinate relative conversion processing performed on the image to be processed specifically includes: S310: rendering the image to be processed, and setting the position of the 3D camera and the origin of the 3D simulation; S320: converting the 3D coordinate information of the sample objects in 3D space into the 2D coordinate information of a 2D image through a coordinate matrix conversion algorithm, and saving the 2D coordinate information as the coordinate labels of the sample image. Specifically, the position of the 3D camera and the origin of the 3D simulation are set through python; under the 3D camera's shooting angle, the 3D coordinate information of the simulated sample objects relative to the simulation origin is converted into the 2D coordinate information of the 2D image through the coordinate matrix conversion algorithm, giving the sample image with 2D coordinate labels.
As shown in fig. 3, as an alternative embodiment, the specific flow of step S200 is: S210: the sample object model and the counter model respectively generate a sample object and a counter; S220: volume collision attributes are given to the sample object and the counter, and gravity information is given to the sample object; S230: the length, width and height of the counter and its initial 3D coordinates are set to (x, y, z), and the sample objects are randomly distributed in the counter through python to generate an initial image; S240: the 3D coordinates of the sample object are acquired through a gravity and time-sequence algorithm; S250: the 3D coordinates and the initial image are saved, and the image to be processed is output. Specifically, once the volume collision attribute and gravity information parameters are set, the basic model can simulate the conditions of a real scene more truly and achieve a more lifelike effect. After the length, width, height and initial coordinates of the counter are set, the simulated counter can be constructed through python and the simulated sample objects randomly distributed inside it. The gravity information is tied to the specific simulation engine, and different engines have different setting algorithms. The time-sequence algorithm animates the objects, a process likewise tied to the selected simulation engine. The gravity information mainly simulates the physical behavior of the sample objects so that their placement looks more realistic. python stores the coordinate information only after it confirms that an object's final position no longer changes.
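The rule that coordinates are stored only once an object's final position stops changing can be sketched as a polling loop. The `get_position` callback into the simulation engine is hypothetical; real engines expose their own settle/sleep signals.

```python
def wait_until_settled(get_position, max_steps=1000, tol=1e-6):
    """Poll an object's position each simulation step; return its 3D
    coordinate once two consecutive readings agree within tol."""
    prev = get_position()
    for _ in range(max_steps):
        cur = get_position()
        if all(abs(a - b) <= tol for a, b in zip(cur, prev)):
            return cur  # position unchanged between steps: object has settled
        prev = cur
    return prev  # give up after max_steps and keep the last reading

# Demo: a falling object whose z coordinate decreases, then stays constant.
trace = iter([(0.0, 0.0, 5.0), (0.0, 0.0, 2.0), (0.0, 0.0, 0.0),
              (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)])
final_coord = wait_until_settled(lambda: next(trace))
```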
Note that the 3D coordinates of a simulated sample object are the 3D coordinates of four of its vertices. When the final positions of the simulated sample objects and the simulated counter are determined, the four vertices that can represent each simulated sample object under the 3D camera's shooting angle are extracted, and the 3D coordinates of those four vertices are saved as the position of that simulated sample object.
As an optional embodiment, step S310 further includes adding random illumination brightness to the 3D space before rendering the image to be processed. Specifically, the random illumination simulates scenes in which the sample objects sit on the counter under different lighting.
As an alternative embodiment, the 3D coordinates of the sample objects are all saved in an (obj_x, obj_y, obj_z) list in python. Specifically, the (obj_x, obj_y, obj_z) list is a data format stored in python, and the 3D coordinates of the sample-object vertices are all stored under this list, where every four consecutive vertex coordinates represent one sample object.
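The convention that every four consecutive vertex coordinates represent one sample object can be sketched as a small grouping helper (the function name and example coordinates are illustrative, not from the patent).

```python
def group_vertices(vertex_list, verts_per_obj=4):
    """Split a flat (obj_x, obj_y, obj_z) vertex list into per-object
    tuples of verts_per_obj consecutive vertices."""
    if len(vertex_list) % verts_per_obj:
        raise ValueError("vertex list length must be a multiple of %d" % verts_per_obj)
    return [tuple(vertex_list[i:i + verts_per_obj])
            for i in range(0, len(vertex_list), verts_per_obj)]

# Two sample objects, four vertices each
objs = group_vertices([(0, 0, 0), (8, 0, 0), (8, 0, 9), (0, 0, 9),
                       (10, 0, 0), (18, 0, 0), (18, 0, 9), (10, 0, 9)])
```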
As an alternative embodiment, in step S320 the coordinate matrix conversion algorithm includes a translation matrix formula and a rotation matrix formula. Specifically, the 3D coordinates of the sample object are converted into 2D coordinates through the translation matrix formula and the rotation matrix formula.
As an alternative embodiment, the translation matrix formula is:
move_point = original_point - [obj_x, obj_y, obj_z];
the rotation matrix formula is:
r_x = [[1, 0, 0], [0, cos(obj_x), sin(obj_x)], [0, -sin(obj_x), cos(obj_x)]]
r_y = [[cos(obj_y), 0, -sin(obj_y)], [0, 1, 0], [sin(obj_y), 0, cos(obj_y)]]
r_z = [[cos(obj_z), sin(obj_z), 0], [-sin(obj_z), cos(obj_z), 0], [0, 0, 1]];
the method comprises the following steps that r _ x, r _ y and r _ z are relative coordinate conversion vectors of a sample object in a 3D space from an origin around an x axis, a y axis and a z axis respectively; move _ point is the moving distance of the sample object from the origin; origin _ point is the origin coordinate; cos (obj _ x), cos (obj _ y), cos (obj _ z), sin (obj _ x), sin (obj _ y), and sin (obj _ z) are all the euler angles of the rotation matrix. Specifically, a translation matrix formula is obtained by calculating the 3D coordinates of the origin and the sample object; the rotation matrix formula is calculated from the 3D coordinates (obj _ x, obj _ y, obj _ z) of the sample object.
As an alternative embodiment, the basic data are the shape and size of the sample object and the material and size of the counter; the material includes glass and wood, with wood grain distributed on the wood. Specifically, the shape and size are the main parameters of the deep-learning objects: the model keeps essentially the same shape and size as the real object so that a more realistic model can be built. The shape mainly refers to the object's profile in the length, width and height directions, and the size refers to the concrete dimensions of that profile in those three directions. It should be noted that, since some display counters appearing in cigarette display pictures are made of wood and glass, the corresponding materials must also be simulated. In particular, wooden counters frequently carry irregular grains such as wood grain, so the simulated counter can be given a texture-mapping treatment in the 3D simulation.
As shown in fig. 2, in the step S100, establishing a basic model specifically includes: S110: respectively establishing white-box models of the sample object and the counter; S120: generating the sample object model with real texture through 3D simulation, according to the basic data acquired from a real sample picture and the sample-object white-box model; S130: performing random mapping through 3D simulation, according to the material and size of the counter and the counter white-box model, to generate the counter model. Specifically, after the white-box models of the sample object and the counter are flattened into a plane, a UV texture map is obtained through automatic processing; during this processing, the pixels of all parts other than the outer package are matted out, and only the part corresponding to the outer package is kept and given the corresponding conversion processing. The outermost layer of the outer package, i.e. of the object, is the part that the human eye can directly observe. The concrete conversion is determined by the shape and outer contour of the basic model, so as to better match the basic model's texture contour.
This embodiment is only a specific example and does not limit the implementation of the invention.
Example two:
as shown in fig. 5, the present invention further provides a simulation model for automatically generating an image of a sample displayed at a cigarette terminal, where the simulation model is used to implement the simulation method according to the first embodiment, and the simulation method includes: the acquisition module acquires a real sample image and extracts basic data of the real sample image; the sample object generation module is used for carrying out 3D simulation according to a real sample image and basic data to generate a sample object with real texture; the counter generation module is used for carrying out 3D simulation according to the real sample image and the basic data and generating a counter after random mapping; the 3D space random distribution module is used for randomly distributing the sample objects in the counter to obtain an image to be processed; the sample generation module is used for acquiring 3D coordinate information of a sample object and carrying out coordinate relative conversion processing on the 3D coordinate information to obtain 2D coordinate information; and storing the 2D coordinate information as a coordinate label of the sample object, and outputting the coordinate label and the image to be processed as a sample image. Specifically, the acquisition module processes the real sample image, extracts basic data of the sample object and the counter, and transmits the basic data to the sample object generation module and the counter generation module respectively. And the sample object generating module and the counter generating module respectively generate a simulated sample object and a simulated counter after mapping processing according to the basic data and the real sample image. 
And the 3D space random distribution module randomly distributes the simulation sample objects in the simulation counter, and finally, the sample generation module performs coordinate relative conversion on the 3D coordinates of the simulation sample objects to generate a sample image with 2D coordinate labels.
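The five modules of embodiment two can be wired into one pipeline sketch. Every method body below is a placeholder (an assumption): the patent does not publish the concrete 3D-simulation calls, and the class name, hard-coded sizes, and the z-dropping "conversion" are all illustrative stand-ins.

```python
import random

class SampleImagePipeline:
    """Acquisition, object/counter generation, 3D random distribution,
    and sample generation modules composed into a single pass."""

    def acquire(self, real_image):
        # Acquisition module: extract basic data (hard-coded here as a stand-in).
        return {"object_size": (8.0, 2.0, 9.0), "counter_size": (100.0, 40.0, 30.0)}

    def generate_scene(self, base_data, n_objects):
        # Object/counter generation + 3D space random distribution modules:
        # scatter objects on the counter floor.
        length, width, _ = base_data["counter_size"]
        return [(random.uniform(0.0, length), random.uniform(0.0, width), 0.0)
                for _ in range(n_objects)]

    def make_sample(self, coords_3d):
        # Sample generation module: 3D -> 2D coordinate labels
        # (dropping z here as a stand-in for the real matrix conversion).
        return [(x, y) for x, y, _ in coords_3d]

    def run(self, real_image, n_objects=5):
        base = self.acquire(real_image)
        coords = self.generate_scene(base, n_objects)
        return self.make_sample(coords)

labels = SampleImagePipeline().run(real_image=None, n_objects=3)
```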
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (9)

1. A simulation method for automatically generating a cigarette terminal display sample image is characterized by comprising the following steps:
s100: acquiring basic data of a real sample image, establishing a basic model, and presetting the number of sample images generated by the basic model; the base model comprises a counter model and a sample object model;
s200: randomly distributing a plurality of sample objects generated by the sample object model in a counter generated by the counter model through python to obtain an image to be processed;
s300: carrying out coordinate relative conversion processing on the image to be processed to obtain the sample image with coordinate labels;
s400: judging whether the number of the sample images reaches a preset value; if not, the steps S200 to S400 are repeated; if so, finishing the generation of the sample image;
the specific process of the step S200 is as follows:
s210: the sample object model and the counter model respectively generate the sample object and the counter;
s220: the sample object and the counter are endowed with volume collision attributes, and gravity information is endowed to the sample object;
s230: setting the length, width and height of the counter and the initial 3D coordinate as (x, y, z), and randomly distributing the sample objects in the counter through python to generate an initial image;
s240: acquiring 3D coordinates of the sample object through a gravity and time sequence algorithm;
s250: and saving the 3D coordinates and the initial image, and outputting the image to be processed.
2. The simulation method for automatically generating an image of a cigarette terminal display sample according to claim 1, wherein in the step S300, the coordinate relative transformation processing is performed on the image to be processed, specifically comprising:
s310: rendering the image to be processed, and setting the position of a 3D camera and the origin of 3D simulation;
s320: converting the 3D coordinate information of the sample object in the 3D space into 2D coordinate information of a 2D image through a coordinate matrix conversion algorithm; the 2D coordinate information is saved as the coordinate annotation of the sample image.
3. The simulation method for automatically generating an image of a cigarette terminal display sample according to claim 2, wherein the step S310 further comprises adding random illumination brightness to the 3D space before rendering the image to be processed.
4. The simulation method for automatic generation of cigarette terminal display sample images as claimed in claim 2, wherein the 3D coordinates of the sample objects are all saved in python as an (obj_x, obj_y, obj_z) list.
5. The simulation method for automatic generation of cigarette terminal display sample images according to claim 4, wherein in step S320, the coordinate matrix transformation algorithm comprises a translation matrix formula and a rotation matrix formula.
6. The simulation method for automatic generation of cigarette terminal display sample images according to claim 5,
the formula of the translation matrix is:
move_point=original_point-[obj_x,obj_y,obj_z];
the formula of the rotation matrix is:
r_x=[[1,0,0],[0,cos(obj_x),sin(obj_x)],[0,-sin(obj_x),cos(obj_x)]]
r_y=[[cos(obj_y),0,-sin(obj_y)],[0,1,0],[sin(obj_y),0,cos(obj_y)]]
r_z=[[cos(obj_z),sin(obj_z),0],[-sin(obj_z),cos(obj_z),0],[0,0,1]];
wherein r_x, r_y and r_z are the relative coordinate rotation matrices of the sample object about the x axis, y axis and z axis of the 3D space, respectively; move_point is the displacement of the sample object from the origin; original_point is the origin coordinate; and cos(obj_x), cos(obj_y), cos(obj_z), sin(obj_x), sin(obj_y) and sin(obj_z) are the trigonometric functions of the Euler angles of the rotation matrices.
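The translation and rotation formulas above can be written out directly with NumPy. This is an illustrative sketch rather than claim text; the function names `translate` and `rotation_matrices` are assumed for illustration.

```python
import numpy as np

def translate(original_point, obj):
    # move_point = original_point - [obj_x, obj_y, obj_z]
    return np.asarray(original_point, float) - np.asarray(obj, float)

def rotation_matrices(obj_x, obj_y, obj_z):
    """Build r_x, r_y, r_z exactly as given by the rotation matrix formulas."""
    cx, sx = np.cos(obj_x), np.sin(obj_x)
    cy, sy = np.cos(obj_y), np.sin(obj_y)
    cz, sz = np.cos(obj_z), np.sin(obj_z)
    r_x = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    r_y = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    r_z = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return r_x, r_y, r_z
```

At zero Euler angles each matrix reduces to the identity, and each matrix is orthogonal (its transpose is its inverse), which is a quick sanity check on the formulas.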
7. The simulation method for automatic generation of cigarette terminal display sample images according to claim 1, wherein the basic data are the shape and size of the sample object and the material and size of the counter; the material comprises glass and wood, and the wood bears a wood-grain texture.
8. The simulation method for automatic generation of cigarette terminal display sample images according to claim 7, wherein in step S100, establishing the basic model specifically comprises:
S110: respectively establishing white-box models of the sample object and the counter;
S120: generating the sample object model with realistic texture through 3D simulation, according to the basic data obtained from a real sample picture and the sample object white-box model;
S130: generating the counter model by random texture mapping through 3D simulation, according to the material and size of the counter and the counter white-box model.
9. A simulation model for automatic generation of cigarette terminal display sample images, wherein the simulation model is used for implementing the simulation method according to any one of claims 1-8, and comprises:
an acquisition module, which acquires a real sample image and extracts the basic data of the real sample image;
a sample object generation module, which performs 3D simulation according to the real sample image and the basic data to generate the sample object with realistic texture;
a counter generation module, which performs 3D simulation according to the real sample image and the basic data, and generates the counter after random texture mapping;
a 3D-space random distribution module, which randomly distributes the sample objects within the counter to obtain the image to be processed;
a sample generation module, which acquires the 3D coordinate information of the sample objects and performs relative coordinate conversion on it to obtain 2D coordinate information; the 2D coordinate information is saved as the coordinate annotation of the sample objects, and the coordinate annotation and the image to be processed are output as the sample image.
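As a rough illustration of the sample generation module's 3D-to-2D conversion, the sketch below uses a standard pinhole camera projection. The patent does not specify the camera model, so the function `project_point` and the intrinsic matrix `K` are assumptions for illustration only.

```python
import numpy as np

def project_point(p3d, K, R, t):
    """Project a 3D point into 2D pixel coordinates (assumed pinhole model).

    p3d: (x, y, z) point in the 3D simulation space.
    K:   3x3 camera intrinsic matrix (focal lengths and principal point).
    R:   3x3 camera rotation matrix; t: camera translation vector.
    Returns (u, v) pixel coordinates usable as a 2D coordinate annotation.
    """
    # World -> camera coordinates.
    cam = R @ np.asarray(p3d, float) + np.asarray(t, float)
    # Perspective divide and intrinsic scaling.
    u = K[0, 0] * cam[0] / cam[2] + K[0, 2]
    v = K[1, 1] * cam[1] / cam[2] + K[1, 2]
    return u, v
```

A point on the optical axis projects to the principal point, which is an easy check that the conversion behaves sensibly before saving the 2D coordinates as annotations.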
CN202211075129.XA 2022-09-03 2022-09-03 Simulation method and model for automatic generation of cigarette terminal display sample image Active CN115205432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211075129.XA CN115205432B (en) 2022-09-03 2022-09-03 Simulation method and model for automatic generation of cigarette terminal display sample image

Publications (2)

Publication Number Publication Date
CN115205432A CN115205432A (en) 2022-10-18
CN115205432B true CN115205432B (en) 2022-11-29

Family

ID=83573569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211075129.XA Active CN115205432B (en) 2022-09-03 2022-09-03 Simulation method and model for automatic generation of cigarette terminal display sample image

Country Status (1)

Country Link
CN (1) CN115205432B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115601631B (en) * 2022-12-15 2023-04-07 深圳爱莫科技有限公司 Cigarette display image recognition method, system, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858539A (en) * 2019-01-24 2019-06-07 武汉精立电子技术有限公司 A kind of ROI region extracting method based on deep learning image, semantic parted pattern
CN114049536A (en) * 2021-11-17 2022-02-15 广西中烟工业有限责任公司 Virtual sample generation method and device, storage medium and electronic equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636707B (en) * 2013-11-07 2018-03-23 同方威视技术股份有限公司 The method of automatic detection cigarette
CN110309737A (en) * 2019-06-14 2019-10-08 广州图匠数据科技有限公司 A kind of information processing method applied to cigarette sales counter, apparatus and system
JP2022505998A (en) * 2019-10-15 2022-01-17 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Augmented reality data presentation methods, devices, electronic devices and storage media
US20210182950A1 (en) * 2019-12-16 2021-06-17 Myntra Designs Private Limited System and method for transforming images of retail items
CN112116582A (en) * 2020-09-24 2020-12-22 深圳爱莫科技有限公司 Cigarette detection and identification method under stock or display scene
US11593870B2 (en) * 2020-10-28 2023-02-28 Shopify Inc. Systems and methods for determining positions for three-dimensional models relative to spatial features
JP7170074B2 (en) * 2021-02-01 2022-11-11 株式会社スクウェア・エニックス VIRTUAL STORE MANAGEMENT PROGRAM, VIRTUAL STORE MANAGEMENT SYSTEM AND VIRTUAL STORE MANAGEMENT METHOD
CN113963127B (en) * 2021-12-22 2022-03-15 深圳爱莫科技有限公司 Simulation engine-based model automatic generation method and processing equipment
CN114155374B (en) * 2022-02-09 2022-04-22 深圳爱莫科技有限公司 Ice cream image training method, detection method and processing equipment

Similar Documents

Publication Publication Date Title
CN110111236B (en) Multi-target sketch image generation method based on progressive confrontation generation network
CN101156175B (en) Depth image-based representation method for 3d object, modeling method and apparatus, and rendering method and apparatus using the same
RU2215326C2 (en) Image-based hierarchic presentation of motionless and animated three-dimensional object, method and device for using this presentation to visualize the object
US8933928B2 (en) Multiview face content creation
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN106778628A (en) A kind of facial expression method for catching based on TOF depth cameras
US10984610B2 (en) Method for influencing virtual objects of augmented reality
CN106652015B (en) Virtual character head portrait generation method and device
CN108305327A (en) A kind of image rendering method
CN115205432B (en) Simulation method and model for automatic generation of cigarette terminal display sample image
CN109711472B (en) Training data generation method and device
CN106652037B (en) Face mapping processing method and device
WO2022089143A1 (en) Method for generating analog image, and electronic device and storage medium
CN113657357B (en) Image processing method, image processing device, electronic equipment and storage medium
CN113297701B (en) Simulation data set generation method and device for multiple industrial part stacking scenes
JP2008140385A (en) Real-time representation method and device of skin wrinkle at character animation time
US10755476B2 (en) Image processing method and image processing device
CN112598768B (en) Method, system and device for disassembling strokes of Chinese characters with common fonts
CN113963127B (en) Simulation engine-based model automatic generation method and processing equipment
CN113240790A (en) Steel rail defect image generation method based on 3D model and point cloud processing
CN114972601A (en) Model generation method, face rendering device and electronic equipment
CN115578236A (en) Pose estimation virtual data set generation method based on physical engine and collision entity
CN111476235B (en) Method for synthesizing 3D curved text picture
CN116958332B (en) Method and system for mapping 3D model in real time of paper drawing based on image recognition
CN112926614A (en) Box labeling image expansion method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant