CN112348737A - Method for generating simulation image, electronic device and storage medium - Google Patents


Info

Publication number
CN112348737A
CN112348737A (application number CN202011175323.6A)
Authority
CN
China
Prior art keywords
area
image
commodity
migration
style
Prior art date
Legal status
Granted
Application number
CN202011175323.6A
Other languages
Chinese (zh)
Other versions
CN112348737B (en)
Inventor
王文琦 (Wang Wenqi)
Current Assignee
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Priority to CN202011175323.6A priority Critical patent/CN112348737B/en
Publication of CN112348737A publication Critical patent/CN112348737A/en
Priority to PCT/CN2021/121846 priority patent/WO2022089143A1/en
Application granted granted Critical
Publication of CN112348737B publication Critical patent/CN112348737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/04
    • G06T7/11 — Region-based segmentation (under G06T7/10 Segmentation; edge detection; G06T7/00 Image analysis)
    • G06T7/60 — Analysis of geometric attributes
    • G06T7/70 — Determining position or orientation of objects or cameras

Abstract

Embodiments of the invention relate to the field of image processing and disclose a method for generating a simulated image, an electronic device, and a storage medium. The method comprises: acquiring a style-migration region in an initial simulated image; extracting, from the initial simulated image, a commodity region containing a simulated commodity image and a background region, where the background region is the image obtained by deleting the extracted commodity region from the initial simulated image; generating a migration image of the commodity region according to the commodity region within the style-migration region and the style-migration model corresponding to that region; and placing the migration image at the position of the commodity region in the background region to generate a target simulated image. With the embodiments of the application, the difference between the generated target simulated image and the target actual image is small.

Description

Method for generating simulation image, electronic device and storage medium
Technical Field
Embodiments of the invention relate to the field of image processing, and in particular to a method for generating a simulated image, an electronic device, and a storage medium.
Background
With the development of artificial intelligence technology, intelligent container systems capable of automatically identifying commodities have appeared. Such a system photographs the commodities in the container with one or more cameras mounted inside it, transmits the captured images to a server, and uses the server's recognition algorithm to identify and count the types and quantities of commodities in the container in real time.
Commodity identification in an intelligent container is realized with deep-learning-based visual recognition. Accurate identification with deep learning requires a large training data set for support: the more training data the set contains, the more accurate the trained result. Training data are usually collected and annotated manually, which is costly and time-consuming. To reduce this cost, simulated images can be generated instead; for example, in Unity3D, a mapping from the real environment to a virtual environment is established, covering camera parameters, lighting, scene layout, and 3D models, and a large number of simulated images are generated in combination with the domain-randomization technique.
However, large background areas such as the container itself are introduced when generating a simulated image, and the influence of the simulated container environment on the simulated goods differs from the influence of the actual environment on real goods, so the generated simulated commodity image data differ considerably from manually acquired actual image data.
Disclosure of Invention
An object of embodiments of the present invention is to provide a method for generating a simulated image, an electronic device, and a storage medium, such that the difference between the generated target simulated image and the target actual image is small.
To solve the above technical problem, an embodiment of the present invention provides a method for generating a simulated image, comprising: acquiring a style-migration region in an initial simulated image; extracting, from the initial simulated image, a commodity region containing a simulated commodity image and a background region, where the background region is the image obtained by deleting the extracted commodity region from the initial simulated image; generating a migration image of the commodity region according to the commodity region within the style-migration region and the style-migration model corresponding to that region; and placing the migration image at the position of the commodity region in the background region to generate a target simulated image.
An embodiment of the present invention also provides an apparatus for generating a simulated image, comprising an acquisition module, an extraction module, a migration module, and an image-generation module. The acquisition module acquires a style-migration region in the initial simulated image. The extraction module extracts, from the initial simulated image, a commodity region containing a simulated commodity image and a background region, the background region being the image obtained by deleting the extracted commodity region from the initial simulated image. The migration module generates a migration image of the commodity region according to the commodity region within the style-migration region and the corresponding style-migration model. The image-generation module places the migration image at the position of the commodity region in the background region to generate a target simulated image.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of simulated image generation described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above method of generating a simulated image.
In the embodiments of the application, the initial simulated image contains a plurality of style-migration regions, each with a corresponding style-migration model, so the migration image of a commodity region can be generated from the commodity region within a style-migration region and that region's model. Because the initial simulated image is divided into style-migration regions and each region has its own model, the migration images of commodity regions within the same region share the same style, which improves the accuracy of the generated migration images. Because each style-migration model is trained on actual target sample images, migration through the model reduces the difference between the generated migration image and an actually acquired image of the commodity region. In addition, image migration is applied to the commodity regions rather than directly to the whole initial simulated image, which avoids the influence of unnecessary background regions introduced by the whole image; for example, it prevents the container image from undergoing the same style migration. Reducing the style migration of unnecessary image content reduces the difference between the target simulated image and the target actual image, and improves the accuracy of models trained with the target simulated image.
Additionally, acquiring at least one style-migration region in the initial simulated image comprises: dividing the target actual image into N target migration regions according to a preset dividing condition, where N is an integer greater than 1; and dividing the initial simulated image into N style-migration regions according to the size data of each target migration region. Because the initial simulated image is divided according to the target migration regions rather than randomly, each divided style-migration region corresponds to a target migration region, which improves the accuracy of the subsequently generated target simulated image.
In addition, before generating a migration image of the commodity region according to the style migration model corresponding to the style migration region and the commodity region in the style migration region, the method further includes: the process of training the style migration model corresponding to each style migration area is as follows: acquiring a sample target migration area from a preset target sample image and acquiring a sample style migration area corresponding to the sample target migration area from a preset simulation sample image, wherein the sample style migration area has the same image style as the style migration area; extracting a target commodity area where the commodity image is located from the target sample image; extracting a simulated commodity area where the commodity image is located from the simulated sample image; and generating a style migration model corresponding to the style migration area according to the target commodity area in the sample target migration area, the simulated commodity area in the sample style migration area and the network structure of style migration. Before the migration image is generated, corresponding areas of a preset target sample image and a preset simulation sample image are divided, and style migration models are trained by using the target commodity areas and the simulation commodity areas in the corresponding areas, so that the accuracy of training of each style migration model is improved.
In addition, before generating the migration image of the commodity region, the method further includes: acquiring the size data of each commodity region and the size data of the style-migration region; and finding, from the initial simulated image, the commodity regions located within the style-migration region according to those size data.
In addition, finding the commodity regions located within the style-migration region from the initial simulated image comprises: acquiring the center-point coordinates of a commodity region; and, if the center point lies within the style-migration region, determining that the commodity region is located within the style-migration region.
In addition, the size data of the style-migration region include the width and height of the style-migration region; the size data of a commodity region include the vertex coordinates of the commodity region and its width and height. Acquiring the center-point coordinates of the commodity region comprises: taking the vertex abscissa plus half the width as the abscissa of the center point; and taking the vertex ordinate plus half the height as the ordinate of the center point.
In addition, dividing the target actual image into N target migration regions according to a preset dividing condition comprises: dividing the target actual image according to the illumination intensity in the image and preset illumination-intensity ranges; or dividing it according to the distortion characteristics in the image. Multiple ways of dividing the target migration regions are thus provided, making the division more flexible.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
FIG. 1 is a flow chart of a method of simulating image generation according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a method of simulating image generation according to a second embodiment of the present invention;
FIG. 3 is a schematic illustration of an actual image of a target provided in accordance with a second embodiment of the present invention;
FIG. 4 is a target actual image including two target migration areas provided in accordance with a second embodiment of the present invention;
FIG. 5 is a schematic illustration of a merchandise area and a background area provided in accordance with a second embodiment of the present invention;
FIG. 6 is a schematic illustration of a simulated image of an object provided in accordance with a second embodiment of the invention;
FIG. 7 is a flow chart of a method of simulating image generation according to a third embodiment of the present invention;
FIG. 8 is a block diagram showing a configuration of an apparatus for simulated image generation according to a fourth embodiment of the present invention;
FIG. 9 is a block diagram of an electronic device provided in a fifth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in order to provide a better understanding of the application; however, the technical solution claimed in the application can be implemented without these details, and various changes and modifications may be made based on the following embodiments.
The inventor found that style migration is currently applied to the whole image. The aim of a commodity-recognition algorithm, however, is to identify commodities accurately, and when an image contains few commodities, the background area migrates in the same style as the commodities, increasing the difference between the migrated image and an actually acquired image.
A first embodiment of the present invention relates to a method for generating a simulation image, the flow of which is shown in fig. 1, and the method includes:
step 101: at least one style migration region in the initial simulated image is acquired.
Step 102: and extracting a commodity area and a background area where the simulated commodity image is located from the initial simulated image, wherein the background area is an image obtained by deleting the extracted commodity area from the initial simulated image.
Step 103: and generating a migration image of the commodity area according to the style migration model corresponding to the style migration area and the commodity area in the style migration area, wherein the style migration model is obtained based on actual target sample image training.
Step 104: and placing the migration image at the position of the commodity area in the background area to generate a target simulation image.
In the embodiments of the application, the initial simulated image contains a plurality of style-migration regions, each with a corresponding style-migration model, so the migration image of a commodity region can be generated from the commodity region within a style-migration region and that region's model. Because the initial simulated image is divided into style-migration regions and each region has its own model, the migration images of commodity regions within the same region share the same style, which improves the accuracy of the generated migration images. Because each style-migration model is trained on actual target sample images, migration through the model reduces the difference between the generated migration image and an actually acquired image of the commodity region. In addition, image migration is applied to the commodity regions rather than directly to the whole initial simulated image, which avoids the influence of unnecessary background regions introduced by the whole image; for example, it prevents the container image from undergoing the same style migration. Reducing the style migration of unnecessary image content reduces the difference between the target simulated image and the target actual image, and improves the accuracy of models trained with the target simulated image.
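The four steps above can be sketched end-to-end. This is an illustrative sketch only: the function and variable names (`find_region`, `generate_target_image`, the `(x, y, w, h)` box layout) are assumptions rather than the patent's notation, and the trained style-migration models are stood in for by plain callables.

```python
import numpy as np
from typing import Callable, Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height), top-left origin

def find_region(box: Box, regions: List[Box]) -> int:
    """Index of the style-migration region containing the box's centre point."""
    cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
    for i, (rx, ry, rw, rh) in enumerate(regions):
        if rx <= cx < rx + rw and ry <= cy < ry + rh:
            return i
    raise ValueError("commodity box lies in no style-migration region")

def generate_target_image(initial: np.ndarray, regions: List[Box],
                          commodity_boxes: List[Box],
                          models: Dict[int, Callable[[np.ndarray], np.ndarray]]) -> np.ndarray:
    """Steps 101-104: crop each commodity, migrate it with the model of the
    region it falls in, and paste the result back onto the background."""
    background = initial.copy()
    for x, y, w, h in commodity_boxes:
        crop = background[y:y + h, x:x + w].copy()        # step 102: commodity region
        idx = find_region((x, y, w, h), regions)          # locate its style region
        background[y:y + h, x:x + w] = models[idx](crop)  # steps 103-104: migrate, paste back
    return background
```

A usage example: with a single style region covering the image and a model that doubles pixel values, only the commodity box is transformed while the rest of the background is left untouched.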
A second embodiment of the present invention relates to a method for generating a simulated image; it refines the first embodiment and is applied to an electronic device. Its flow is shown in fig. 2.
Step 201: and dividing the target actual image into N target migration areas according to a preset dividing condition, wherein N is an integer greater than 1.
Specifically, a target actual image may be acquired in advance, and the initial simulated image may be generated according to the size data of the target actual image and an acquired model of the actual scene in which the image was captured.
The method of simulated image generation in this example can be applied to a variety of scenarios; for example, it can generate target simulated images containing commodities for subsequent training of a commodity-recognition model. In this example, the initial simulated image may be generated in a virtual environment, e.g. by building an intelligent container in a virtual scene and simulating the shooting of the simulated goods in the container with configured illumination and camera parameters. The initial simulated image contains a plurality of simulated commodities.
In one example, the target actual image is divided into N target migration regions according to the illumination intensity in the image and preset illumination-intensity ranges; or it is divided into N target migration regions according to the distortion characteristics in the image.
Specifically, the illumination intensity in the target actual image correlates with the position of the lamp: the closer to the lamp, the higher the intensity. Based on this, the illumination intensity at a preset distance from the lamp can be chosen as a threshold, with the intensity vertically below the lamp as the maximum and the intensity at the position farthest from the lamp as the minimum. Several thresholds can be chosen as needed, and N intensity ranges are obtained from the thresholds together with the maximum and minimum. The illumination intensity at each designated position in the target actual image is then acquired, and the image is divided into N target migration regions according to the N intensity ranges and the intensities at the designated positions.
For example, the image in fig. 3 is a target actual image of one container shelf; the mark f denotes a fill light. A fisheye camera is used in this example: the largest circular area is the area captured by the fisheye camera, and the square circumscribing the circle is the frame of the whole target actual image. The designated positions are points A, B, and C; the maximum illumination is dmax, the minimum is dmin, and the intensity 10 cm horizontally from the fill lights is d1. Because the fill lights surround the periphery, the center point receives the minimum illumination. Taking d1 as a threshold yields two intensity ranges, range 1 [dmin, d1] and range 2 [d1, dmax]. The intensity at point A exceeds d1, while the intensities at points B and C are below d1; using the two ranges and the intensities at the designated points, the distance from point B to the center can serve as a radius, giving target migration region 1 and target migration region 2 shown in fig. 3. As another example, with a single fill light and a preset intensity D2, a region D with intensity greater than D2 and a region C with intensity less than D2 are obtained, as shown in fig. 4.
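The threshold construction in the example above can be sketched as follows. The helper names are illustrative, and the spatial shape of each region (e.g. the circle of radius |B − center|) is omitted; only the intensity bucketing used to assign designated points to regions is shown.

```python
from bisect import bisect_right
from typing import List, Sequence, Tuple

def illumination_ranges(d_min: float, d_max: float,
                        thresholds: Sequence[float]) -> List[Tuple[float, float]]:
    """Split [d_min, d_max] into N contiguous intensity ranges using the
    chosen thresholds (e.g. the intensity 10 cm from the fill light)."""
    edges = [d_min, *sorted(thresholds), d_max]
    return list(zip(edges[:-1], edges[1:]))

def range_index(intensity: float, thresholds: Sequence[float]) -> int:
    """Index of the range a measured intensity falls into (0 = dimmest),
    used to assign a designated point to a target migration region."""
    return bisect_right(sorted(thresholds), intensity)
```

With one threshold d1 = 4 on [0, 10], this reproduces the two-range example: a point with intensity below d1 falls into range 0, one above d1 into range 1.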
In another example, the division may follow the degree of image distortion: an image captured with a fisheye lens is most severely distorted at the positions farthest from the camera, so the image can be divided according to distance from the fisheye camera.
Step 202: and dividing the initial simulation image into N style migration areas according to the size data of each target migration area.
Specifically, the coordinate data of a target migration region can be acquired. As shown in fig. 4, the coordinates of region D are written (x, y, w, h), where x and y are the abscissa and ordinate of point O of region D, w is the region's width, and h is its height. The initial simulated image can be divided according to the coordinate data of the target migration region to obtain the corresponding style-migration region. Because the target actual image and the initial simulated image have the same size, both can be placed in a unified coordinate system; according to the coordinates (x, y, w, h) of region D, the point with the same coordinates is found in the simulated image, and with the width and height of the target migration region this yields the corresponding style-migration region D'.
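Because the two images share one size, the mapping of step 202 is a coordinate copy in the unified coordinate system. A minimal sketch (helper names assumed, not from the patent):

```python
import numpy as np
from typing import Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def style_region_for(target_region: Box, target_shape: Tuple[int, int],
                     sim_shape: Tuple[int, int]) -> Box:
    """Region D of the target actual image maps to region D' of the
    simulated image with identical coordinates, since sizes are equal."""
    assert target_shape == sim_shape, "the patent assumes equal image sizes"
    return target_region

def crop(image: np.ndarray, region: Box) -> np.ndarray:
    """Cut a style-migration region out of an image."""
    x, y, w, h = region
    return image[y:y + h, x:x + w]
```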
Step 201 to step 202 are specific descriptions of step 101 in the first embodiment.
Step 203: acquiring coordinate data of each commodity area and size data of the style migration area;
step 204: and searching the commodity area in the style transition area from the initial simulation image according to the size data of each commodity area and the size data of the style transition area.
Specifically, each commodity region may be processed in turn. The size data of a commodity region may include the coordinates of its boundary points and its width and height; the size data of the style-migration region include its coordinates, width, and height. Several distinct boundary points of the commodity region are acquired and checked in turn; if all of them lie within the style-migration region, the commodity region is determined to lie within it. If any boundary point lies outside, the next style-migration region is checked, until all regions have been examined or the region containing the commodity region has been found.
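The all-boundary-points test described above can be sketched with the four corners of a rectangular commodity box standing in for the boundary points (names illustrative):

```python
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def box_corners(box: Box) -> List[Tuple[int, int]]:
    x, y, w, h = box
    return [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]

def contains(region: Box, point: Tuple[int, int]) -> bool:
    rx, ry, rw, rh = region
    px, py = point
    return rx <= px <= rx + rw and ry <= py <= ry + rh

def region_of(box: Box, regions: List[Box]) -> Optional[int]:
    """First style-migration region whose bounds contain every corner of
    the commodity box; None if no region fully contains it."""
    for i, region in enumerate(regions):
        if all(contains(region, p) for p in box_corners(box)):
            return i
    return None
```

A box straddling two regions is contained by neither, which is exactly the case the centre-point test of the third embodiment resolves more cheaply.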
Step 205: and extracting a commodity area and a background area where the simulated commodity image is located from the initial simulated image, wherein the background area is an image obtained by deleting the extracted commodity area from the initial simulated image.
Specifically, the commodity regions containing the simulated commodity images may be extracted from the initial simulated image according to the annotation information. Each commodity region contains one complete commodity image and, for ease of extraction, may be set to be rectangular. As shown in fig. 5, the dashed boxes mark the positions from which the commodity regions were extracted; the extracted commodity regions are a1 to a5, and the image remaining after extraction, containing the dashed boxes in fig. 5, is the background region.
Step 206: and generating a migration image of the commodity area according to the style migration model corresponding to the style migration area and the commodity area in the style migration area, wherein the style migration model is obtained based on actual target sample image training.
Specifically, the style-migration model corresponding to the style-migration region must be obtained before style migration; it can be trained from acquired target sample images and simulated sample images.
The style-migration model for a style-migration region is trained as follows: acquire a sample target migration region from a preset target sample image and the corresponding sample style-migration region from a preset simulated sample image, where the sample style-migration region has the same image style as the style-migration region (an image style may be, for example, an oil-painting style, the style of a scene surrounding a fill light, or a fisheye-capture style); extract the target commodity regions containing commodity images from the target sample image; extract the simulated commodity regions containing commodity images from the simulated sample image; and generate the style-migration model from the target commodity regions within the sample target migration region, the simulated commodity regions within the sample style-migration region, and a style-migration network structure.
Specifically, several preset target sample images are obtained and divided according to the illumination or distortion conditions to obtain the sample target migration regions; the simulated sample images are divided correspondingly, according to the size data of the sample target migration regions, to obtain the corresponding sample style-migration regions. Annotation information may be used to extract the target commodity regions containing each commodity from the target sample images, and the simulated commodity regions from the simulated sample images. The target commodity regions within a sample target migration region and the simulated commodity regions within the corresponding sample style-migration region are then collected. Because the sample target migration region corresponds to the sample style-migration region, the style of the simulated commodity regions must be migrated to the style of the target commodity regions; training on the target and simulated commodity regions with a style-migration network structure yields the style-migration model for that region. The models for the other style-migration regions are trained in the same way and are not described again here.
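The per-region pairing of training crops can be sketched as follows. This assumes a sample target migration region and its sample style-migration region share coordinates in the unified coordinate system, and that a box belongs to a region when its centre lies inside; the names are illustrative, and the actual network training (e.g. a GAN-style style-transfer network) is omitted.

```python
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def center_in(region: Box, box: Box) -> bool:
    rx, ry, rw, rh = region
    cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
    return rx <= cx < rx + rw and ry <= cy < ry + rh

def pair_training_boxes(regions: List[Box], target_boxes: List[Box],
                        sim_boxes: List[Box]) -> Dict[int, Tuple[List[Box], List[Box]]]:
    """For each region, collect the target commodity boxes and simulated
    commodity boxes falling in it; each pair of sets is the training data
    for that region's style-migration model."""
    return {i: ([b for b in target_boxes if center_in(r, b)],
                [b for b in sim_boxes if center_in(r, b)])
            for i, r in enumerate(regions)}
```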
The commodity regions within a style-migration region are input into the corresponding style-migration model to obtain their migration images.
Step 207: and placing the migration image at the position of the commodity area in the background area to generate a target simulation image.
The migration images are pasted back into the corresponding positions of the background region to generate the target simulated image; for example, area C1 in the background region marks where a commodity region was extracted, and the corresponding migration image is placed into area C1, as shown in fig. 6.
A third embodiment of the present invention relates to a method for generating a simulated image and details step 204. One implementation of finding the commodity regions located within a style-migration region in the initial simulated image is shown in fig. 7.
Step 301: and acquiring the coordinates of the center point of the commodity area.
In one example, the size data of the style-migration region include its width and height, and the size data of the commodity region include its vertex coordinates, width, and height. The abscissa of the center point is the vertex abscissa plus half the width; the ordinate of the center point is the vertex ordinate plus half the height.
Specifically, coordinate data of the target migration area may be acquired, and as shown in fig. 4, coordinates of the D area are represented as (x, y, w, h), where x represents an abscissa of the O point position of the D area, and y represents an ordinate of the O point of the D area. w represents the width of the D region, and h represents the height of the D region. In this example, in order to facilitate extraction of the commodity region, the commodity region is set to be rectangular in shape, and the size data of the commodity region may include the vertex coordinates of the commodity region, the width and the height of the commodity region. Taking a half of the sum of the abscissa and the width of the vertex coordinate as the abscissa in the central point coordinate; the ordinate in the center point coordinate is set to be half of the sum of the ordinate of the vertex coordinate and the height. For example, as shown in fig. 8, the rectangular box is represented as a product area, and the coordinates of the vertex a of the product area are represented as (x, y); calculating the coordinates of the center point of the commodity area as follows: (C _ Di _ x, C _ Di _ y): c _ Di _ x ═ x + w)/2; c _ Di _ y is (y + h)/2.
Step 302: if the center point coordinates are located in the style migration area, determine that the commodity area is located in the style migration area.
Specifically, it is judged whether the center point coordinates are located in the style migration area. If so, the commodity area is determined to be located in that style migration area; otherwise, the commodity area is determined not to be located in that style migration area, and it is judged next whether the commodity area is located in another style migration area.
This example provides a quick way of determining whether the commodity area is located in the style migration area.
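Steps 301 and 302 can be sketched as below, assuming (x, y) is the top-left vertex of the box so that its center lies at (x + w/2, y + h/2); the function names are illustrative.

```python
def center_point(box):
    """Step 301: center of a rectangular commodity area (x, y, w, h),
    assuming (x, y) is its top-left vertex A."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def commodity_in_region(commodity_box, region_box):
    """Step 302: the commodity area counts as located in a style migration
    area iff its center point coordinates fall inside that area."""
    cx, cy = center_point(commodity_box)
    rx, ry, rw, rh = region_box
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh
```

Testing a single point instead of the whole rectangle is what makes the determination fast: one comparison per region, regardless of box overlap.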
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a step may be split into multiple steps, as long as the same logical relationship is included, all of which fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant designs, without changing its core design, also falls within the protection scope of this patent.
A fourth embodiment of the present invention relates to a simulated image generating apparatus 40 including: an acquisition module 401, an extraction module 402, a migration module 403, and an image generation module 404. The specific structure of the simulated image generating apparatus 40 is shown in fig. 8. The acquisition module 401 is configured to acquire a style migration area in the initial simulated image; the extraction module 402 is configured to extract, from the initial simulated image, a commodity area where the simulated commodity image is located and a background area, where the background area is the image obtained by deleting the extracted commodity area from the initial simulated image; the migration module 403 is configured to generate a migration image of the commodity area according to the style migration model corresponding to the style migration area and the commodity area in the style migration area; the image generation module 404 is configured to place the migration image at the position of the commodity area in the background area to generate a target simulation image.
It should be understood that this embodiment is an example of the apparatus corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module referred to in this embodiment is a logical module; in practical applications, one logical unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, elements that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other elements exist in this embodiment.
A fifth embodiment of the present invention relates to an electronic device, which has a specific structure as shown in fig. 9 and includes at least one processor 501; and a memory 502 communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of generating a simulated image as in the first embodiment or the second embodiment.
The memory 502 and the processor 501 are connected by a bus, which may include any number of interconnected buses and bridges that link one or more of the various circuits of the processor 501 and the memory 502. The bus may also link various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor 501 is transmitted over a wireless medium through an antenna, which further receives the data and transmits the data to the processor 501.
The processor 501 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 502 may be used to store data used by processor 501 in performing operations.
A sixth embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above method of generating a simulated image.
Those skilled in the art can understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A method of simulating image generation, comprising:
acquiring at least one style migration area in the initial simulation image;
extracting a commodity area and a background area where a simulated commodity image is located from the initial simulated image, wherein the background area is an image obtained by deleting the extracted commodity area from the initial simulated image;
generating a migration image of the commodity area according to a style migration model corresponding to the style migration area and the commodity area in the style migration area, wherein the style migration model is obtained based on actual target sample image training;
and placing the migration image at the position of the commodity area in the background area to generate a target simulation image.
2. The method of simulated image generation as claimed in claim 1, wherein said obtaining at least one style migration region in the initial simulated image comprises:
dividing the target actual image into N target migration areas according to a preset dividing condition, wherein N is an integer greater than 1;
and dividing the initial simulation image into N style migration areas according to the size data of each target migration area.
3. The method of simulating image generation according to claim 2, wherein before generating the migration image of the commodity region according to the style migration model corresponding to the style migration region and the commodity region in the style migration region, the method further comprises:
the process of training the style migration model corresponding to each style migration area is as follows:
acquiring a sample target migration area from a preset target sample image and acquiring a sample style migration area corresponding to the sample target migration area from a preset simulation sample image, wherein the sample style migration area has the same image style as the style migration area;
extracting a target commodity area where the commodity image is located from the target sample image;
extracting a simulated commodity area where the commodity image is located from the simulated sample image;
and generating a style migration model corresponding to the style migration area according to the target commodity area in the sample target migration area, the simulated commodity area in the sample style migration area and the network structure of style migration.
4. The method of simulating image generation according to claim 2, wherein before generating the migration image of the commodity region according to the style migration model corresponding to the style migration region and the commodity region in the style migration region, the method further comprises:
acquiring size data of each commodity area and size data of the style migration area;
and searching the commodity area in the style migration area from the initial simulation image according to the size data of each commodity area and the size data of the style migration area.
5. The method of simulated image generation as claimed in claim 4, wherein said searching for a commodity region located within said style transition region from said initial simulated image based on coordinate data of each commodity region and size data of said style transition region comprises:
acquiring a central point coordinate of the commodity area;
and if the center point coordinate is located in the style migration area, determining that the commodity area is located in the style migration area.
6. The method of simulating image generation of claim 2, wherein the size data of the style migration area comprises: a width and a height of the style migration area; the size data of the commodity area includes: the vertex coordinates of the commodity region, the width and the height of the commodity region;
the acquiring of the center point coordinates of the commodity area includes:
taking a half of the sum of the abscissa and the width of the vertex coordinate as the abscissa in the center point coordinate;
and taking a half of the sum of the ordinate of the vertex coordinate and the height as the ordinate of the center point coordinate.
7. The method for simulating image generation according to claim 2, wherein the dividing the target actual image into N target migration areas according to the preset dividing condition includes:
dividing the target actual image into N target migration areas according to the illumination intensity in the target actual image and a preset illumination intensity range; or,
and dividing the target actual image into N target migration areas according to the distortion characteristics in the target actual image.
8. An apparatus for simulating image generation, comprising: the system comprises an acquisition module, an extraction module, a migration module and an image generation module;
the acquisition module is used for acquiring a style migration area in the initial simulation image;
the extraction module is used for extracting a commodity area and a background area where a simulated commodity image is located from the initial simulated image, wherein the background area is an image obtained by deleting the extracted commodity area from the initial simulated image;
the migration module is used for generating a migration image of the commodity area according to a style migration model corresponding to the style migration area and the commodity area in the style migration area;
the image generation module is used for placing the migration image at the position of the commodity area in the background area to generate a target simulation image.
9. An electronic device, comprising:
at least one processor; and,
a memory communicatively connected to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of simulated image generation as claimed in any of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a method of simulating image generation according to any one of claims 1 to 7.
CN202011175323.6A 2020-10-28 2020-10-28 Method for generating simulation image, electronic device and storage medium Active CN112348737B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011175323.6A CN112348737B (en) 2020-10-28 2020-10-28 Method for generating simulation image, electronic device and storage medium
PCT/CN2021/121846 WO2022089143A1 (en) 2020-10-28 2021-09-29 Method for generating analog image, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011175323.6A CN112348737B (en) 2020-10-28 2020-10-28 Method for generating simulation image, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112348737A 2021-02-09
CN112348737B CN112348737B (en) 2023-03-24

Family

ID=74355227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011175323.6A Active CN112348737B (en) 2020-10-28 2020-10-28 Method for generating simulation image, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN112348737B (en)
WO (1) WO2022089143A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469876A (en) * 2021-07-28 2021-10-01 北京达佳互联信息技术有限公司 Image style migration model training method, image processing method, device and equipment
WO2022089143A1 (en) * 2020-10-28 2022-05-05 达闼机器人有限公司 Method for generating analog image, and electronic device and storage medium
WO2022206158A1 (en) * 2021-03-31 2022-10-06 商汤集团有限公司 Image generation method and apparatus, device, and storage medium
CN117152541A (en) * 2023-10-27 2023-12-01 浙江由由科技有限公司 Fresh commodity identification method combining space transformation with illuminance migration and result verification

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829849A (en) * 2019-01-29 2019-05-31 深圳前海达闼云端智能科技有限公司 A kind of generation method of training data, device and terminal
CN110830706A (en) * 2018-08-08 2020-02-21 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN111783525A (en) * 2020-05-20 2020-10-16 中国人民解放军93114部队 Aerial photographic image target sample generation method based on style migration

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10872399B2 (en) * 2018-02-02 2020-12-22 Nvidia Corporation Photorealistic image stylization using a neural network model
CN109447137B (en) * 2018-10-15 2022-06-14 聚时科技(上海)有限公司 Image local style migration method based on decomposition factors
CN110490960B (en) * 2019-07-11 2023-04-07 创新先进技术有限公司 Synthetic image generation method and device
CN112348737B (en) * 2020-10-28 2023-03-24 达闼机器人股份有限公司 Method for generating simulation image, electronic device and storage medium



Also Published As

Publication number Publication date
CN112348737B (en) 2023-03-24
WO2022089143A1 (en) 2022-05-05

Similar Documents

Publication Publication Date Title
CN112348737B (en) Method for generating simulation image, electronic device and storage medium
CN108734120B (en) Method, device and equipment for labeling image and computer readable storage medium
CN111161349B (en) Object posture estimation method, device and equipment
CN109583483B (en) Target detection method and system based on convolutional neural network
CN109446889B (en) Object tracking method and device based on twin matching network
CN108198145A (en) For the method and apparatus of point cloud data reparation
CN109344813B (en) RGBD-based target identification and scene modeling method
CN110133443B (en) Power transmission line component detection method, system and device based on parallel vision
CN104952056A (en) Object detecting method and system based on stereoscopic vision
CN109871829A (en) A kind of detection model training method and device based on deep learning
CN110765882A (en) Video tag determination method, device, server and storage medium
CN111680678A (en) Target area identification method, device, equipment and readable storage medium
CN108596032B (en) Detection method, device, equipment and medium for fighting behavior in video
JP2019185787A (en) Remote determination of containers in geographical region
CN114387199A (en) Image annotation method and device
CN114049536A (en) Virtual sample generation method and device, storage medium and electronic equipment
CN110007764B (en) Gesture skeleton recognition method, device and system and storage medium
CN114648709A (en) Method and equipment for determining image difference information
CN113435456A (en) Rock slice component identification method and device based on machine learning and medium
CN111797832A (en) Automatic generation method and system of image interesting region and image processing method
CN116597246A (en) Model training method, target detection method, electronic device and storage medium
CN115116052A (en) Orchard litchi identification method, device, equipment and storage medium
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN110148205A (en) A kind of method and apparatus of the three-dimensional reconstruction based on crowdsourcing image
CN113378864A (en) Method, device and equipment for determining anchor frame parameters and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 201100 2nd floor, building 2, No. 1508, Kunyang Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

GR01 Patent grant