CN112001983A - Method and device for generating occlusion image, computer equipment and storage medium - Google Patents
Method and device for generating occlusion image, computer equipment and storage medium
- Publication number
- CN112001983A CN112001983A CN202011184326.6A CN202011184326A CN112001983A CN 112001983 A CN112001983 A CN 112001983A CN 202011184326 A CN202011184326 A CN 202011184326A CN 112001983 A CN112001983 A CN 112001983A
- Authority
- CN
- China
- Prior art keywords
- image
- occlusion
- generation model
- region
- trained
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
Abstract
The application relates to a method, an apparatus, a computer device and a storage medium for generating an occlusion image. The method comprises: acquiring an original image; adding random noise to a selected region of the original image to form a first image; inputting the first image into a pre-constructed occlusion region generation model, so that the model converts the random noise added to the selected region into a corresponding occlusion effect while keeping the non-selected region of the first image unchanged, and obtaining and outputting a second image; and obtaining an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model. By inputting an image whose selected region carries random noise into the pre-constructed occlusion region generation model, fairly realistic occlusion images can be generated in large batches, which solves the problem that occlusion images are difficult to acquire in large quantities and improves the efficiency of occlusion image acquisition.
Description
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for generating an occlusion image, a computer device, and a storage medium.
Background
In an ADAS (Advanced Driver Assistance System), normal operation of the camera is the basis for normal operation of the entire system; during long periods of driving, the camera may fail to capture clear road-scene images because of dust, oil stains or hardware faults.
Currently, attempts are being made to combine deep learning with ADAS systems, so that occlusion regions in an image can be detected and eliminated by a corresponding detection model. However, training such a detection model relies on a large number of occlusion images, and the conditions under which these images arise are harsh, making them difficult to collect in quantity; acquiring occlusion images in this way is therefore inefficient.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a computer device and a storage medium for generating an occlusion image.
A method of generating an occlusion image, the method comprising:
acquiring an original image;
adding random noise on the selected area of the original image to form a first image;
inputting the first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts the random noise of the selected region of the first image into a corresponding occlusion effect according to the random noise added to the selected region and keeps the non-selected region of the first image unchanged, and a second image is obtained and output;
and obtaining an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model.
In one embodiment, before the random noise is added to the selected area of the original image to form the first image, the method further comprises:
randomly selecting a preset mask to be added to the original image;
and determining the selected area according to the coverage area of the original image formed by the preset mask.
In one embodiment, the randomly selecting a preset mask to be added to the original image includes:
randomly selecting a plurality of preset masks from a preset mask library;
generating a mask adding sequence based on the plurality of preset masks;
adding a plurality of masks in the mask addition sequence to the original image;
if overlapping areas exist among the masks added to the original image, performing fusion processing on the overlapping areas according to a preset mode; the preset mode comprises an intersection fusion mode or a union fusion mode.
In one embodiment, before the inputting the first image into the pre-constructed occlusion region generation model, the method further comprises:
acquiring a sample image set;
performing joint training on an occlusion region generation model to be trained and an occlusion region discrimination network based on the sample image set to construct the occlusion region generation model; the occlusion region discrimination network is used for discriminating the images output by the occlusion region generation model to be trained during the model construction process.
In one embodiment, the sample image set comprises an original sample image and a real occlusion sample image;
the method for performing joint training on the occlusion region generation model to be trained and the occlusion region discrimination network based on the sample image set to construct the occlusion region generation model comprises the following steps:
adding random noise on a selected region of the original sample image;
inputting the original sample image added with the random noise into the occlusion region generation model to be trained, triggering the occlusion region generation model to be trained to convert the random noise into a corresponding predicted occlusion effect, and outputting a predicted occlusion sample image corresponding to the original sample image;
inputting the predicted occlusion sample image output by the occlusion region generation model to be trained into the occlusion region discrimination network to obtain a discrimination prediction result, output by the discrimination network, as to whether the predicted occlusion sample image belongs to the class of the real occlusion sample image;
constructing a first loss function of the occlusion region generation model to be trained based on the discrimination prediction result, the degree of similarity between the predicted occlusion sample image and the non-selected region of the original sample image, and the smoothness constraint on the predicted occlusion effect; constructing a second loss function of the occlusion region discrimination network to be trained based on the discrimination prediction result and the real discrimination result;
and alternately training the occlusion region generation model to be trained and the occlusion region discrimination network based on the first loss function and the second loss function.
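As a hedged sketch, the two losses described in this embodiment might look as follows in NumPy. The exact adversarial term, the weighting factors `lam_rec` and `lam_smooth`, and the total-variation form of the smoothness constraint are all assumptions, since the claims do not fix concrete formulas:

```python
import numpy as np

def first_loss(d_pred, fake_img, orig_img, mask, lam_rec=10.0, lam_smooth=1.0):
    """Sketch of the generator ('occlusion region generation model') loss:
    adversarial term + similarity of the non-selected region + smoothness
    of the predicted occlusion effect. Weights are illustrative assumptions."""
    adv = -np.mean(np.log(d_pred + 1e-8))                       # fool the discriminator
    rec = np.mean(np.abs((fake_img - orig_img) * (1 - mask)))   # keep non-selected region
    dy = np.abs(np.diff(fake_img * mask, axis=0)).mean()        # total-variation style
    dx = np.abs(np.diff(fake_img * mask, axis=1)).mean()        # smoothness constraint
    return adv + lam_rec * rec + lam_smooth * (dx + dy)

def second_loss(d_real, d_fake):
    """Sketch of the discriminator loss: score real occlusion samples as real
    and predicted occlusion samples as fake (binary cross-entropy form)."""
    return -np.mean(np.log(d_real + 1e-8)) - np.mean(np.log(1 - d_fake + 1e-8))
```

A discriminator that separates real from predicted samples well yields a small second loss, while a generator that preserves the non-selected region and produces smooth occlusion textures drives the first loss down.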
In one embodiment, alternately training the occlusion region generation model to be trained and the occlusion region discrimination network based on the first loss function and the second loss function includes:
updating the model parameters of the occlusion region generation model to be trained according to a first loss function value obtained from the first loss function, and having the occlusion region discrimination network predict, for the predicted occlusion sample image output by the updated occlusion region generation model to be trained, a discrimination result as to whether the predicted occlusion sample image belongs to the class of the real occlusion sample image.
In one embodiment, the method further comprises: updating the network parameters of the occlusion region discrimination network according to a second loss function value obtained from the second loss function; the second loss function value is obtained by the occlusion region discrimination network predicting the discrimination result as to whether the predicted occlusion sample image belongs to the class of the real occlusion sample image.
An apparatus to generate an occlusion image, the apparatus comprising:
the image acquisition module is used for acquiring an original image;
the noise adding module is used for adding random noise on the selected area of the original image to form a first image;
the noise conversion module is used for inputting the first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts the random noise in the selected region of the first image into a corresponding occlusion effect according to the random noise added to the selected region, keeps the non-selected region of the first image unchanged, and obtains and outputs a second image;
and the image output module is used for obtaining an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an original image;
adding random noise on the selected area of the original image to form a first image;
inputting the first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts the random noise of the selected region of the first image into a corresponding occlusion effect according to the random noise added to the selected region and keeps the non-selected region of the first image unchanged, and a second image is obtained and output;
and obtaining an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an original image;
adding random noise on the selected area of the original image to form a first image;
inputting the first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts the random noise of the selected region of the first image into a corresponding occlusion effect according to the random noise added to the selected region and keeps the non-selected region of the first image unchanged, and a second image is obtained and output;
and obtaining an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model.
The method, the apparatus, the computer device and the storage medium for generating an occlusion image comprise: acquiring an original image; adding random noise to a selected region of the original image to form a first image; inputting the first image into a pre-constructed occlusion region generation model, so that the model converts the random noise of the selected region into a corresponding occlusion effect while keeping the non-selected region of the first image unchanged, and obtaining and outputting a second image; and obtaining an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model. In this scheme, an image whose selected region carries random noise is input into the pre-constructed occlusion region generation model to obtain an occlusion image corresponding to the original image. Fairly realistic occlusion images can be generated in batches by the model and then used to train a restoration model, which solves the problem that occlusion images are difficult to acquire in large quantities and significantly improves the efficiency of occlusion image acquisition.
Drawings
FIG. 1 is a diagram of an application environment of a method for generating an occlusion image in one embodiment.
FIG. 2 is a flow diagram illustrating a method for generating an occlusion image, according to one embodiment.
FIG. 3a is an exemplary diagram of a normal image in one embodiment.
FIG. 3b is an exemplary diagram of an occlusion image in one embodiment.
FIG. 4a is an exemplary diagram of a default mask in one embodiment.
FIG. 4b is a graph illustrating an exemplary effect of adding random noise to a selected area in one embodiment.
FIG. 4c is a diagram illustrating a comparison of an original image and a second image, in accordance with one embodiment.
FIG. 5a is an exemplary diagram of a set of occlusion image effects in one embodiment.
FIG. 5b is an illustration of an alternative set of occlusion image effects in one embodiment.
FIG. 6 is a flowchart illustrating the steps of determining the selected area according to the coverage area formed by the preset mask on the original image in one embodiment.
FIG. 7 is a flowchart illustrating the steps of randomly selecting a default mask to be added to the original image according to an embodiment.
FIG. 8 is a flowchart illustrating the steps of jointly training the occlusion region generation model to be trained and the occlusion region discrimination network in one embodiment.
FIG. 9 is a flowchart illustrating a method for constructing an occlusion region generation model in one embodiment.
FIG. 10 is a schematic flow diagram of creating an occlusion region generation model, under an embodiment.
FIG. 11 is a diagram of an example effect of generating a false occlusion texture in the absence of a smoothing constraint in one embodiment.
FIG. 12 is a block diagram of an apparatus that generates an occlusion image in one embodiment.
FIG. 13 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for generating the occlusion image can be applied to the application environment shown in fig. 1, where the terminal 11 communicates with the server 12 via a network. The server 12 acquires an original image from the terminal; adds random noise to the selected area of the original image to form a first image; inputs the first image into a pre-constructed occlusion region generation model, so that the model converts the random noise of the selected region into a corresponding occlusion effect while keeping the non-selected region unchanged, and obtains and outputs a second image; and obtains an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model, sending it to the terminal 11 or a preset database. The occlusion images corresponding to original images can enrich a database of occlusion images, providing large numbers of images similar to really captured occlusion images for subsequently training models that use occlusion images as training data, such as an occlusion repair model or an occlusion recognition model.
The terminal 11 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and may also be, but not limited to, various vehicle-mounted devices, such as a vehicle event data recorder, a vehicle-mounted camera, a vehicle-mounted monitor, a reversing radar, and the like; the server 12 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a method for generating an occlusion image is provided, which is exemplified by the method applied to the server 12 in fig. 1, and includes the following steps:
Step 21, acquiring an original image.
The sources of the original images are shown in fig. 3a and fig. 3b and fall mainly into two categories: normal images (fig. 3a) and occlusion images (fig. 3b). An original image is an image collected by the corresponding camera device during normal driving that reflects the road condition or the driver's field of view. A normal image is one in which the driver's view is not noticeably blocked and driving is unaffected, while an occlusion image is one in which the driver's view is noticeably blocked, reflecting an obstructed field of view. The original image may be a picture captured directly by various vehicle-mounted devices, or frames extracted repeatedly from a video file recorded by such devices.
Specifically, the server may obtain the corresponding original images directly from a preset database, or may connect to the vehicle-mounted terminal to obtain picture or video files stored on it. For a video file, the server samples it by frame to obtain pictures that reflect the whole video. The size, sharpness and similar properties of each picture are then evaluated: if the picture meets the preset standard it is kept (for example, occluded images may be discarded and normal images kept); otherwise it is discarded. Image sharpness can be judged comprehensively by calculating the number and size of the pixels the image contains. The images remaining after screening are used as the original images.
The original images are size-normalized to a uniform size, for example 256 × 256. It should be noted that image processing is not limited to screening and size normalization; it may also include brightness adjustment, sharpness adjustment, contrast adjustment, gray-scale adjustment, mirroring, sharpening and other processing steps.
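The size-normalization step above can be sketched as follows, as a minimal nearest-neighbour resize in NumPy. The patent only states that images are unified in size (e.g. to 256 × 256); the interpolation method is an assumption:

```python
import numpy as np

def normalize_size(img, size=256):
    """Nearest-neighbour resize of an HxW or HxWxC array to size x size,
    a stand-in for the size-normalization preprocessing step."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # map output rows to source rows
    cols = np.arange(size) * w // size   # map output cols to source cols
    return img[rows][:, cols]
```

A real pipeline would more likely use bilinear resizing from an image library; this sketch only shows where the step sits in the flow.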
Obtaining images in various ways and applying various processing steps improves the richness of the original images in both type and quantity.
Step 22, adding random noise to the selected area of the original image to form a first image.
The selected area of the original image can be determined by adding masks, which come in many types and can be randomly generated, as shown in fig. 4a. After a mask is added, either the masked area or the unmasked area can be taken as the selected area; for example, areas 401-B, 402-B and 403-B in fig. 4a can serve as mask areas, in which case areas 401-G, 402-G and 403-G are non-mask areas, and vice versa. A fusion operation may be performed between masks.
Random noise refers to unwanted or redundant interference information in the image data. Fig. 4b shows an example of the effect after a mask is overlaid on the original image to obtain the selected region and random noise is added to that region: the selected region is almost unrecognizable and differs greatly from the original image, while the non-selected region remains consistent with the original image, with no significant change.
Specifically, the server first overlays a chosen mask on the acquired original image and determines the selected and non-selected areas according to the mask area; it then processes the selected area with a random-noise algorithm, adding random noise to the selected area of the original image and forming a first image in which part of the image is random noise and the rest retains the original image.
The selected area of the original image is thus determined by the mask, and random noise is added to it to obtain the first image. The random noise provides a random initial value for the subsequent image conversion, which improves the quality of the generated occlusion image.
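The noise-addition step can be sketched as follows, assuming a boolean mask marks the selected area. The patent says only "random noise", so the uniform distribution over [0, 255] is an assumption:

```python
import numpy as np

def add_noise_to_selected(img, mask, rng=None):
    """Form the 'first image': fill the selected (mask == True) area with
    random noise while leaving the non-selected area untouched."""
    rng = np.random.default_rng() if rng is None else rng
    m = mask.astype(bool)
    if img.ndim == 3:            # broadcast a 2-D mask over colour channels
        m = m[..., None]
    noise = rng.uniform(0, 255, size=img.shape)
    return np.where(m, noise, img)
```

The output matches the fig. 4b description: pure noise inside the selected region, original pixel values everywhere else.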
Step 23, inputting the first image into the pre-constructed occlusion region generation model, so that the model converts the random noise of the selected region of the first image into a corresponding occlusion effect while keeping the non-selected region unchanged, thereby obtaining and outputting a second image.
The pre-constructed occlusion region generation model can identify the random noise added to the selected region of the first image, convert it into a corresponding occlusion effect according to the characteristics of the noise region, and leave the content of the non-selected region unchanged, thereby obtaining the second image.
Specifically, the occlusion region generation model can be pre-constructed by training a GAN (Generative Adversarial Network) model. The pre-constructed model transforms the input first image into a second image similar to a really captured occlusion image; because it converts the random noise into a corresponding occlusion effect, the output second images carry varied occlusion effects. For example, fig. 4c shows an original image input to the pre-constructed occlusion region generation model and the second image it outputs: the upper half of the second image is highly consistent with the original image, while the lower half carries an added occlusion effect.
In this step, the random noise is converted into a corresponding occlusion effect by the pre-constructed occlusion region generation model, realizing the conversion of the original image; this solves the problem that occlusion images are difficult to obtain in large quantities and improves the efficiency of occlusion image acquisition.
Step 24, obtaining an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model.
Specifically, after the server obtains the second image output by the occlusion region generation model, the server can process the second image according to subsequent utilization conditions to obtain an occlusion image corresponding to the original image; for example, if the occlusion image corresponding to the original image is to be input into the occlusion region restoration model to be trained subsequently, the second image output by the occlusion region generation model is subjected to image processing, format processing, and the like according to the parameter setting of the occlusion region restoration model to be trained, and the processed second image is used as the occlusion image corresponding to the original image.
Referring to fig. 5a and 5b, which show two sets of occlusion images output by the occlusion region generation model: from left to right are the original images 5a01 and 5b01, the preset masks 5a02 and 5b02, and the occlusion images 5a03 and 5b03 corresponding to the original images. In both sets, the occlusion areas in the resulting occlusion images correspond to the distribution positions of the selected areas.
In this step the image can be processed according to its subsequent use, so that the output image can be used directly; this solves the problem that occlusion images are difficult to obtain in large quantities and improves the efficiency of occlusion image acquisition.
In the method for generating the occlusion image, an original image is obtained; random noise is added to a selected area of the original image to form a first image; the first image is input into a pre-constructed occlusion region generation model, so that the model converts the random noise of the selected region into a corresponding occlusion effect while keeping the non-selected region unchanged, and a second image is obtained and output; and an occlusion image corresponding to the original image is obtained according to the second image output by the occlusion region generation model. This solves the problem that occlusion images are difficult to obtain in large quantities and improves the efficiency of occlusion image acquisition.
In one embodiment, as shown in FIG. 6, before adding random noise to the selected area of the original image to form the first image, the method further comprises:
Specifically, one or more preset masks may be randomly selected and added to the original image. The way a mask is added to the original image is not particularly limited: the preset masks may differ from one another in size and scale, may differ in size and scale from the original image, and may be added at random rotation angles to achieve a better random effect. Multiple preset masks may be superimposed on one another on the original image, and the coverage area they form on the original image, taken as a whole, is determined as the selected area.
In this embodiment, a coverage area with a high degree of randomness is formed by randomly selecting preset masks and randomly adding them to the original image, which improves the randomness of the selected-area determination process.
In one embodiment, as shown in fig. 7, randomly selecting a predetermined mask to be added to the original image includes:
Specifically, a number of preset masks are stored in a preset mask library. The server randomly selects several of them and generates a mask addition sequence, in which the masks may be ordered in the order of selection. After the masks in the sequence are added to the image, overlapping regions may exist among them; these can be fused by intersection or by union. Intersection fusion keeps only the parts where masks overlap one another and discards the parts that do not overlap any other mask; union fusion keeps both the overlapped and the non-overlapped areas. For the same overlap, the area obtained by union fusion is therefore usually larger than that obtained by intersection fusion.
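The two fusion modes described above can be sketched with boolean masks. Treating "intersection" as keeping pixels covered by at least two masks is an assumption consistent with "regions which are not overlapped with other masks are discarded":

```python
import numpy as np

def fuse_masks(masks, mode="union"):
    """Fuse overlapping preset masks: 'union' keeps both overlapped and
    non-overlapped areas; 'intersection' keeps only the overlapped areas.
    Masks are boolean arrays of equal shape."""
    stack = np.stack([m.astype(bool) for m in masks])
    if mode == "union":
        return stack.any(axis=0)
    if mode == "intersection":
        return stack.sum(axis=0) >= 2   # pixels covered by two or more masks
    raise ValueError(mode)
```

By construction the union result always contains the intersection result, matching the remark that union fusion yields the larger area.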
In another embodiment, randomly selecting the preset mask to be added to the original image further includes: randomly selecting a preset mask from a preset mask library; generating a mask adding sequence based on the preset mask; adding a plurality of masks in a mask adding sequence to the original image; if overlapping areas exist among the multiple masks added to the original image, performing fusion processing on the overlapping areas according to a preset mode; the preset mode comprises an intersection set fusion mode or a union set fusion mode.
Specifically, only one preset mask may be selected from the preset library to generate the mask addition sequence: a number of differently shaped sub-masks can be obtained from that single mask by random splitting, multi-angle superposition, rotation and similar operations, and the mask addition sequence can be generated after the sub-masks are randomly arranged. The remaining steps are the same as in the example above.
In one embodiment, as shown in fig. 8, before the first image is input into the pre-constructed occlusion region generation model, the method further comprises:
The sample image set is a set of images obtained by real acquisition. The occlusion region generation model is obtained by training a neural network model, here a Generative Adversarial Network (GAN). A generative adversarial network generally comprises a generation network G, which here is the occlusion region generation model to be trained, and a discrimination network D, which is the occlusion region discrimination network.
Specifically, a normal image (i.e. an image without occlusion) in the sample image set is denoted Real_A, and an occlusion image in the sample image set is denoted Real_B. The purpose of jointly training the occlusion region generation model to be trained and the occlusion region discrimination network is to enable the model to convert a Real_A image into an image in the style of Real_B, obtaining a Fake_B image (a non-real occlusion image generated from the Real_A image by imitating the Real_B style) that is structurally similar to Real_B. During joint training, the occlusion region discrimination network distinguishes Real_B images from Fake_B images; driven by the discrimination results, the generation model to be trained and the discrimination network compete against each other, making the Fake_B images generated from Real_A images increasingly vivid and close to real conditions.
In this embodiment, the occlusion region generation model to be trained and the occlusion region discrimination network are jointly trained, so that the characteristics of the generative adversarial network are exploited and the construction effect of the occlusion region generation model is improved.
In one embodiment, the sample image set comprises an original sample image and a true occlusion sample image; as shown in fig. 9, based on the sample image set, performing joint training on the occlusion region generation model to be trained and the occlusion region discrimination network, and constructing the occlusion region generation model, including:
Step 91, adding random noise to a selected region of the original sample image;
Step 92, inputting the original sample image with the added random noise into the occlusion region generation model to be trained, triggering it to convert the random noise into a corresponding predicted occlusion effect and output a predicted occlusion sample image corresponding to the original sample image;
Step 93, inputting the predicted occlusion sample image into the occlusion region discrimination network to obtain a discrimination prediction result as to whether the predicted occlusion sample image is of the class to which the real occlusion sample images belong;
Step 94, constructing a first loss function of the occlusion region generation model to be trained and a second loss function of the occlusion region discrimination network;
Step 95, alternately training the occlusion region generation model to be trained and the occlusion region discrimination network based on the first loss function and the second loss function.
Referring to fig. 10, in this embodiment the original sample image is first preprocessed by resizing; a preset mask is selected and superimposed on the preprocessed original sample image to form a selected region and a non-selected region, and random noise is added to the selected region. The original sample image with the added random noise is then input into the occlusion region generation model to be trained, which outputs a predicted occlusion sample image carrying a predicted occlusion effect.
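The preprocessing and noise-injection step can be illustrated with a dependency-free sketch (nearest-neighbour resizing and additive Gaussian noise are assumptions; the patent fixes neither the resizing method nor the noise distribution):

```python
import numpy as np

def add_noise_in_selected_region(image, mask, size=(64, 64), rng=None):
    """Resize the image and mask (nearest-neighbour indexing), then add
    Gaussian noise only inside the selected region (mask == 1).
    Names and the noise scale are hypothetical."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    ys = np.arange(size[0]) * h // size[0]   # nearest-neighbour row indices
    xs = np.arange(size[1]) * w // size[1]   # nearest-neighbour column indices
    resized = image[np.ix_(ys, xs)].astype(np.float64)
    m = mask[np.ix_(ys, xs)].astype(bool)
    noisy = resized.copy()
    noisy[m] += rng.normal(0.0, 25.0, size=int(m.sum()))
    return noisy, m
```

The returned pair is what the occlusion region generation model to be trained would consume: a noised image plus the mask distinguishing selected from non-selected pixels.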
The occlusion region discrimination network then discriminates the input predicted occlusion sample image against the real occlusion sample images, obtaining a discrimination prediction result as to whether the predicted occlusion sample image is of the class to which the real occlusion sample images belong.
After the discrimination prediction result is obtained, a first loss function of the occlusion region generation model to be trained and a second loss function of the occlusion region discrimination network can be constructed respectively.
Alternating training of the occlusion region generation model to be trained and the occlusion region discrimination network means that the generation model outputs an occlusion image intended to deceive the discrimination network, and the discrimination network then judges the authenticity of that image. It can be understood that the purpose of training the occlusion region generation model is to make its predicted occlusion sample images pass for genuine; in other words, to make it difficult for the occlusion region discrimination network to determine whether a given image is a generated occlusion sample image or a real occlusion sample image.
Specifically, a consistency evaluation function term of the occlusion region generation model to be trained is defined first. The consistency evaluation adopts a structural similarity (SSIM) function, i.e., a function measuring how similar the non-selected region of the predicted occlusion sample image is to the original sample image. The similarity of two pixels i, j is defined as:

$$\mathrm{SSIM}(i,j)=\frac{(2\mu_i\mu_j+c_1)(2\sigma_{ij}+c_2)}{(\mu_i^2+\mu_j^2+c_1)(\sigma_i^2+\sigma_j^2+c_2)}$$

wherein $\mu_i$ and $\mu_j$ respectively denote the pixel means in $W \times W$ neighborhoods centered on pixels i and j, $\sigma_i^2$ and $\sigma_j^2$ are the pixel variances in those neighborhoods, $\sigma_{ij}$ is the covariance of the two image regions, and $c_1$ and $c_2$ are two parameters that keep the denominator valid. In this application only the similarity between the non-selected region of the predicted occlusion sample image and the original sample image is calculated, i.e., consistency with the original sample image is enforced on the non-selected region, with the corresponding loss function:

$$L_{\mathrm{ssim}}=\frac{1}{N}\sum_{M(i)=1}\bigl(1-\mathrm{SSIM}(i,j)\bigr)$$

where N is the number of pixels with $M(i)=1$ and M is the image mask; the higher the image similarity, the smaller the loss.
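A minimal numpy implementation of this masked SSIM consistency loss (the window size and the constants c1, c2 are illustrative defaults; borders are cropped to full windows for simplicity):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def masked_ssim_loss(pred, ref, mask, W=3, c1=0.01 ** 2, c2=0.03 ** 2):
    """SSIM-based consistency loss over the non-selected region
    (mask == 1) between a predicted image and the original."""
    pw = sliding_window_view(pred, (W, W))   # valid W x W windows
    rw = sliding_window_view(ref, (W, W))
    mu_i, mu_j = pw.mean(axis=(-1, -2)), rw.mean(axis=(-1, -2))
    var_i, var_j = pw.var(axis=(-1, -2)), rw.var(axis=(-1, -2))
    cov = (pw * rw).mean(axis=(-1, -2)) - mu_i * mu_j
    ssim = ((2 * mu_i * mu_j + c1) * (2 * cov + c2)) / (
        (mu_i ** 2 + mu_j ** 2 + c1) * (var_i + var_j + c2))
    off = W // 2
    m = mask[off:off + ssim.shape[0], off:off + ssim.shape[1]].astype(bool)
    return float((1.0 - ssim[m]).sum() / m.sum())  # smaller when more similar
```

When the two images agree on the non-selected region the loss is essentially zero, matching the statement that higher similarity yields a smaller loss.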
A generative adversarial network is usually trained with a minimax (mini-max) objective. G is the generation network, i.e., the occlusion region generation model to be trained of this application, and D is the discrimination network, i.e., the occlusion region discrimination network; z denotes the introduced random noise. The generation network G takes the noise data z as input and outputs generated data G(z); the discrimination network D takes real data x or generated data G(z) as input and determines the source of the input. The objective function is:

$$\min_G\max_D V(D,G)=\mathbb{E}_{x\sim p_{\mathrm{data}}(x)}[\log D(x)]+\mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))]$$

wherein $V(D,G)$ denotes the objective to be optimized in the generative adversarial network; $x\sim p_{\mathrm{data}}(x)$ denotes that x obeys the image distribution of the dataset; $\mathbb{E}[\cdot]$ denotes the mathematical expectation; $z\sim p_z$ denotes that z obeys the prior distribution $p_z$, which is a uniform or Gaussian distribution; $D(x)$ in $\log(D(x))$ is the probability estimate output by the discrimination network D when judging the authenticity of the real data x, and $D(G(z))$ is the probability estimate output by D when judging the authenticity of the image G(z) generated by the generation network G.
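A Monte-Carlo estimate of this minimax value, computed from discriminator probability outputs on a real batch and a generated batch, can be sketched as:

```python
import numpy as np

def gan_objective(d_real, d_fake, eps=1e-7):
    """Estimate V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] from
    discriminator outputs on real (d_real) and generated (d_fake)
    batches. eps guards the logarithms against 0 and 1."""
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))
```

A confident, correct discriminator (outputs near 1 on real data, near 0 on generated data) drives this value upward, which is exactly what the inner maximization over D seeks.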
Training a generative adversarial network with the cross-entropy loss suffers from vanishing gradients, so the discrimination function of this application adopts a least-squares form. For the generator:

$$\min_G \tfrac12\,\mathbb{E}_{z\sim p_z}\bigl[(D(G(z))-c)^2\bigr]$$

wherein c is the target value, set for the occlusion region discrimination network D, of the predicted occlusion sample image generated by the occlusion region generation model G to be trained.
Meanwhile, to give the predicted occlusion effect the blurred character of real occlusion, a smoothness constraint is added while training the occlusion region generation model G to be trained:

$$L_{\mathrm{smooth}}=\frac{1}{N}\sum_{M(i)=0}\bigl(\|\nabla_x G(z)_i\|_1+\|\nabla_y G(z)_i\|_1\bigr)$$

wherein M is the preset mask, $\nabla_x$ and $\nabla_y$ respectively denote the gradient operators in the x and y directions, and N is the number of pixels with M value 0. As shown in fig. 11, without the smoothness constraint the model tends to generate false occlusion textures with pronounced gradients, which do not conform to the blurred character of occlusion.
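The smoothness term can be sketched with forward-difference gradients (the exact gradient operator is an assumption; any discrete approximation of the x/y gradients would serve):

```python
import numpy as np

def smoothness_loss(img, mask):
    """Mean absolute forward-difference gradients over the occluded
    region (mask == 0), i.e. the smoothness constraint described above."""
    gx = np.abs(np.diff(img, axis=1))        # x-direction gradients
    gy = np.abs(np.diff(img, axis=0))        # y-direction gradients
    m = (mask == 0)
    n = int(m.sum())
    # Count a gradient only when both pixels it spans are occluded.
    total = gx[m[:, :-1] & m[:, 1:]].sum() + gy[m[:-1, :] & m[1:, :]].sum()
    return float(total / n)
```

A perfectly flat occlusion region incurs zero penalty, while sharp textures inside the occluded region are penalized.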
In summary, the first loss function of the occlusion region generation model G to be trained is constructed from the discrimination prediction result, the similarity between the predicted occlusion sample image and the non-selected region of the original sample image, and the smoothness constraint on the predicted occlusion effect:

$$L_G=\tfrac12\,\mathbb{E}_{z\sim p_z}\bigl[(D(G(z))-c)^2\bigr]+\alpha L_{\mathrm{ssim}}+\beta L_{\mathrm{smooth}}$$

wherein $\alpha$ and $\beta$ are weight parameters balancing the influence of the three function terms.
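Combining the three terms numerically, a toy version of the first loss function (the target value c and the weights alpha, beta are placeholders; the patent does not fix their values):

```python
import numpy as np

def generator_loss(d_fake, ssim_term, smooth_term, c=1.0, alpha=1.0, beta=1.0):
    """First loss function: least-squares adversarial term on the
    discriminator's outputs for generated samples, plus the weighted
    consistency (ssim_term) and smoothness (smooth_term) terms."""
    adv = 0.5 * float(np.mean((d_fake - c) ** 2))
    return adv + alpha * ssim_term + beta * smooth_term
```

In practice ssim_term and smooth_term would come from the consistency and smoothness computations over the current predicted occlusion sample image.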
In the joint training, the occlusion region discrimination network D is also trained; the purpose is to make it distinguish the predicted occlusion sample images from the real occlusion sample images as well as possible, i.e., to determine whether a predicted occlusion sample image belongs to the class of real occlusion sample images. The second loss function is therefore constructed from the difference between the discrimination prediction result and the ground-truth discrimination result, in the minimized form:

$$\min_D \tfrac12\,\mathbb{E}_{x\sim p_{\mathrm{data}}}\bigl[(D(x)-b)^2\bigr]+\tfrac12\,\mathbb{E}_{z\sim p_z}\bigl[(D(G(z))-a)^2\bigr]$$

wherein a denotes the set target value for predicted occlusion sample images and b denotes the target value for real occlusion sample images. Minimizing this loss enables the occlusion region discrimination network D to clearly separate the two data classes: real occlusion sample images and predicted occlusion sample images generated by G.
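A corresponding sketch of the least-squares discriminator loss, with the conventional targets a = 0 for generated samples and b = 1 for real samples (the patent leaves the target values open):

```python
import numpy as np

def discriminator_loss(d_real, d_fake, a=0.0, b=1.0):
    """Second loss function in least-squares form: push D's outputs on
    real occlusion samples toward b and on generated samples toward a."""
    return 0.5 * float(np.mean((d_real - b) ** 2)) \
         + 0.5 * float(np.mean((d_fake - a) ** 2))
```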
In one embodiment, alternately training the occlusion region generation model to be trained and the occlusion region discrimination network based on the first loss function and the second loss function includes: updating the model parameters of the occlusion region generation model to be trained according to a first loss function value obtained from the first loss function, and having the occlusion region discrimination network predict, for the predicted occlusion sample image output by the updated model, the discrimination result as to whether it is of the class to which the real occlusion sample images belong.
The first loss function value is a parameter for evaluating the prediction effect of the occlusion region generation model to be trained, and generally, the smaller the first loss function value is, the better the prediction effect of the occlusion region generation model to be trained is represented.
Specifically, based on the first loss function and the second loss function, the occlusion region generation model to be trained and the occlusion region discrimination network are trained by alternate iterative updates. The first loss function value obtained from the first loss function is used to update the model parameters of the occlusion region generation model to be trained. After each update, the occlusion region discrimination network again judges the predicted occlusion sample image output by the model, and its own parameters are updated according to the second loss function value, improving its discrimination capability; the model parameters of the occlusion region generation model to be trained are then updated again according to the discrimination result. This cycle repeats until a preset training condition of the occlusion region generation model to be trained is reached; the ending condition may be that the number of training rounds reaches a threshold, or that the predicted occlusion sample image output by the model reaches a preset occlusion effect, and is not limited herein.
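The alternating update cycle and its stopping conditions reduce to control flow like the following (the step callables stand in for full parameter updates; the round budget and target loss are stand-ins for the patent's unspecified preset training condition):

```python
def train_alternately(g_step, d_step, rounds=5, target_loss=None):
    """Control flow of the alternating scheme: each round first updates
    the discriminator with G's parameters fixed, then updates G with D's
    parameters fixed, and stops on a round budget or a target G loss."""
    history = []
    for _ in range(rounds):
        d_loss = d_step()   # update D; G is held fixed inside d_step
        g_loss = g_step()   # update G; D is held fixed inside g_step
        history.append((d_loss, g_loss))
        if target_loss is not None and g_loss <= target_loss:
            break           # predicted occlusion effect deemed good enough
    return history
```

In a real implementation g_step and d_step would each run a forward pass, compute the first or second loss function, and apply a gradient update to the corresponding network only.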
In one embodiment, alternately training the occlusion region generation model to be trained and the occlusion region discrimination network based on the first loss function and the second loss function further includes: updating the network parameters of the occlusion region discrimination network according to a second loss function value obtained from the second loss function, the second loss function value being obtained when the occlusion region discrimination network predicts the discrimination result as to whether the predicted occlusion sample image is of the class to which the real occlusion sample images belong. The second loss function value is a parameter for evaluating the classification effect of the occlusion region discrimination network, and the network's parameters are adjusted based on this value to achieve a more accurate classification effect.
After a new round of training of the occlusion region discrimination network is completed, the current discrimination network can distinguish real samples from generated samples better than before the update. Therefore, with the parameters of the occlusion region discrimination network fixed, the occlusion region generation model is trained again, so that the occlusion region generation model to be trained and the occlusion region discrimination network are updated alternately and iteratively.
In one embodiment, the model parameters of the occlusion region generation model to be trained are first fixed, and the occlusion region discrimination network is trained and updated so that its classification capability is maintained. After the occlusion region discrimination network is trained, the occlusion region generation model to be trained is trained and updated; at this time the network parameters of the occlusion region discrimination network are held fixed, and only the loss or error produced by the occlusion region generation model to be trained is propagated back to it, i.e., a first loss function value is obtained from the output of the updated occlusion region discrimination network, and the model parameters of the occlusion region generation model to be trained are updated based on this value. Through the adversarial game between the occlusion region discrimination network and the occlusion region generation model to be trained, the two network models finally reach a stable state.

It should be understood that although the various steps in the flowcharts of fig. 2 and figs. 6-9 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2 and figs. 6-9 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times, and not necessarily sequentially, but in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 12, there is provided an apparatus for generating an occlusion image, including: an image acquisition module 121, a noise adding module 122, a noise conversion module 123 and an image output module 124, wherein:
an image obtaining module 121, configured to obtain an original image;
a noise adding module 122, configured to add random noise to a selected area of the original image to form a first image;
the noise conversion module 123 is configured to input the first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts random noise in the selected region of the first image into a corresponding occlusion effect according to random noise added to the selected region, and keeps a non-selected region of the first image unchanged, thereby obtaining and outputting a second image;
and the image output module 124 is configured to obtain an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model.
In one embodiment, the device for generating the occlusion image further comprises a region selection module, configured to randomly select a preset mask to be added to the original image; and determining the selected area according to the coverage area formed by the preset mask on the original image.
In one embodiment, the region selection module is further configured to randomly select a plurality of preset masks from a preset mask library; generating a mask adding sequence based on a plurality of preset masks; adding a plurality of masks in a mask adding sequence to the original image; if overlapping areas exist among the multiple masks added to the original image, performing fusion processing on the overlapping areas according to a preset mode; the preset mode comprises an intersection set fusion mode or a union set fusion mode.
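One plausible reading of the two fusion modes for binary masks (the patent does not pin down the exact overlap semantics, so this is an assumption): 'union' marks a pixel as occluded if any mask covers it, 'intersection' only where every mask agrees.

```python
import numpy as np

def fuse_masks(masks, mode="union"):
    """Combine several binary masks by the preset fusion mode."""
    stacked = np.stack([np.asarray(m).astype(bool) for m in masks])
    if mode == "union":
        return np.any(stacked, axis=0)        # covered by at least one mask
    if mode == "intersection":
        return np.all(stacked, axis=0)        # covered by every mask
    raise ValueError(f"unknown fusion mode: {mode}")
```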
In one embodiment, the device for generating an occlusion image further comprises a model construction module for obtaining a sample image set; performing joint training on an occlusion region generation model to be trained and an occlusion region discrimination network based on the sample image set to construct an occlusion region generation model; and the occlusion region distinguishing network is used for distinguishing the image output by the occlusion region generation model to be trained in the model construction process.
In one embodiment, the sample image set comprises an original sample image and a true occlusion sample image; the model building module is also used for adding random noise on the selected area of the original sample image; inputting the original sample image added with the random noise into an occlusion area generation model to be trained, triggering the occlusion area generation model to be trained to convert the random noise into a corresponding predicted occlusion effect, and outputting a predicted occlusion sample image corresponding to the original sample image; inputting a predicted occlusion sample image output by an occlusion region generation model to be trained into an occlusion region judgment network to obtain a judgment prediction result which is output by the occlusion region judgment network and is about whether the predicted occlusion sample image is a type to which a real occlusion sample image belongs; constructing a first loss function of an occlusion region generation model to be trained based on the discrimination prediction result, the similarity degree of the prediction occlusion sample image and the non-selected region of the original sample image and the smooth constraint of the prediction occlusion effect; constructing a second loss function of the sheltered area discrimination network to be trained based on the discrimination prediction result and the discrimination real result; and alternately training an occlusion region generation model to be trained and an occlusion region discrimination network based on the first loss function and the second loss function.
In an embodiment, the model building module is further configured to update a model parameter of the occlusion region generation model to be trained according to a first loss function value obtained by the first loss function, and perform, by the occlusion region determination network, prediction of a determination result as to whether the prediction occlusion sample image is a type to which the real occlusion sample image belongs, on a prediction occlusion sample image output by the occlusion region generation model to be trained after the model parameter is updated.
In one embodiment, the model construction module is further configured to update a network parameter of the occlusion region discriminant network according to a second loss function value obtained by the second loss function; the second loss function value is obtained by predicting the judgment result about whether the predicted occlusion sample image is the true occlusion sample image or not by the occlusion region judgment network.
For specific limitations of the apparatus for generating the occlusion image, reference may be made to the above limitations of the method for generating the occlusion image, which are not described herein again. The modules in the device for generating the occlusion image can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 13. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data for generating occlusion images. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of generating an occlusion image.
Those skilled in the art will appreciate that the architecture shown in fig. 13 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an original image;
adding random noise on a selected area of an original image to form a first image;
inputting a first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts random noise of a selected region of the first image into a corresponding occlusion effect according to random noise added to the selected region and keeps a non-selected region of the first image unchanged, and a second image is obtained and output;
and generating a second image output by the model according to the occlusion area to obtain an occlusion image corresponding to the original image.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an original image;
adding random noise on a selected area of an original image to form a first image;
inputting a first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts random noise of a selected region of the first image into a corresponding occlusion effect according to random noise added to the selected region and keeps a non-selected region of the first image unchanged, and a second image is obtained and output;
and generating a second image output by the model according to the occlusion area to obtain an occlusion image corresponding to the original image.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. A method of generating an occlusion image, the method comprising:
acquiring an original image;
adding random noise on the selected area of the original image to form a first image;
inputting the first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts the random noise of the selected region of the first image into a corresponding occlusion effect according to the random noise added to the selected region and keeps the non-selected region of the first image unchanged, and a second image is obtained and output;
and obtaining an occlusion image corresponding to the original image according to the second image output by the occlusion region generation model.
2. The method of claim 1, wherein prior to adding random noise to the selected region of the original image to form the first image, the method further comprises:
randomly selecting a preset mask to be added to the original image;
and determining the selected area according to the coverage area of the original image formed by the preset mask.
3. The method of claim 2, wherein the randomly selecting a default mask to add to the original image comprises:
randomly selecting a plurality of preset masks from a preset mask library;
generating a mask adding sequence based on the plurality of preset masks;
adding a plurality of masks in the mask addition sequence to the original image;
if overlapping areas exist among the masks added to the original image, performing fusion processing on the overlapping areas according to a preset mode; the preset mode comprises an intersection fusion mode or a union fusion mode.
4. The method of any of claims 1 to 3, wherein prior to inputting the first image into a pre-constructed occlusion region generation model, the method further comprises:
acquiring a sample image set;
performing joint training on an occlusion region generation model to be trained and an occlusion region discrimination network based on the sample image set to construct the occlusion region generation model; and the shielding area judging network is used for judging the image output by the shielding area generation model to be trained in the model construction process.
5. The method of claim 4, wherein the sample image set comprises original sample images and true occlusion sample images;
the method for performing joint training on the occlusion region generation model to be trained and the occlusion region discrimination network based on the sample image set to construct the occlusion region generation model comprises the following steps:
adding random noise on a selected region of the original sample image;
inputting the original sample image added with the random noise into the occlusion region generation model to be trained, triggering the occlusion region generation model to be trained to convert the random noise into a corresponding predicted occlusion effect, and outputting a predicted occlusion sample image corresponding to the original sample image;
inputting the predicted occlusion sample image output by the occlusion region generation model to be trained into the occlusion region judgment network to obtain a judgment prediction result which is output by the occlusion region judgment network and is about whether the predicted occlusion sample image is the type to which the real occlusion sample image belongs;
constructing a first loss function of an occlusion region generation model to be trained based on the discrimination prediction result, the similarity degree of the prediction occlusion sample image and the non-selected region of the original sample image and the smooth constraint of the prediction occlusion effect; constructing a second loss function of the sheltered area discrimination network to be trained based on the discrimination prediction result and the discrimination real result;
and alternately training the occlusion region generation model to be trained and the occlusion region discrimination network based on the first loss function and the second loss function.
6. The method according to claim 5, wherein alternately training the occlusion region generation model to be trained and the occlusion region discriminant network based on the first loss function and the second loss function comprises:
updating model parameters of the occlusion region generation model to be trained according to a first loss function value obtained by the first loss function, and predicting a judgment result about whether the predicted occlusion sample image is the type to which the real occlusion sample image belongs by the occlusion region judgment network on the predicted occlusion sample image output by the occlusion region generation model to be trained after model parameters are updated.
7. The method of claim 6, further comprising:
updating the network parameters of the sheltered area judgment network according to a second loss function value obtained by the second loss function; the second loss function value is obtained by predicting, by the occlusion region discrimination network, the discrimination result as to whether the predicted occlusion sample image is the class to which the true occlusion sample image belongs.
8. An apparatus for generating an occlusion image, the apparatus comprising:
the image acquisition module is used for acquiring an original image;
the noise adding module is used for adding random noise on the selected area of the original image to form a first image;
the noise conversion module is used for inputting the first image into a pre-constructed occlusion region generation model, so that the occlusion region generation model converts the random noise in the selected region of the first image into a corresponding occlusion effect according to the random noise added to the selected region, keeps the non-selected region of the first image unchanged, and obtains and outputs a second image;
and the image output module is used for generating the second image output by the model according to the occlusion area to obtain an occlusion image corresponding to the original image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011184326.6A CN112001983B (en) | 2020-10-30 | 2020-10-30 | Method and device for generating occlusion image, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112001983A true CN112001983A (en) | 2020-11-27 |
CN112001983B CN112001983B (en) | 2021-02-09 |
Family
ID=73475261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011184326.6A Active CN112001983B (en) | 2020-10-30 | 2020-10-30 | Method and device for generating occlusion image, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112001983B (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109003253A (en) * | 2017-05-24 | 2018-12-14 | General Electric Co. | Neural network point cloud generation system |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113435358A (en) * | 2021-06-30 | 2021-09-24 | 北京百度网讯科技有限公司 | Sample generation method, device, equipment and program product for training model |
CN113435358B (en) * | 2021-06-30 | 2023-08-11 | 北京百度网讯科技有限公司 | Sample generation method, device, equipment and program product for training model |
CN113486377A (en) * | 2021-07-22 | 2021-10-08 | 维沃移动通信(杭州)有限公司 | Image encryption method and device, electronic equipment and readable storage medium |
CN114898318A (en) * | 2022-05-24 | 2022-08-12 | 昆明理工大学 | Dynamic data enhancement method for lane line detection |
CN115393183A (en) * | 2022-10-28 | 2022-11-25 | 腾讯科技(深圳)有限公司 | Image editing method and device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112001983B (en) | Method and device for generating occlusion image, computer equipment and storage medium | |
WO2021017261A1 (en) | Recognition model training method and apparatus, image recognition method and apparatus, and device and medium | |
CN111080628B (en) | Image tampering detection method, apparatus, computer device and storage medium | |
CN110222787B (en) | Multi-scale target detection method and device, computer equipment and storage medium | |
CN111524137B (en) | Cell identification counting method and device based on image identification and computer equipment | |
CN111611873B (en) | Face replacement detection method and device, electronic equipment and computer storage medium | |
CN111862044A (en) | Ultrasonic image processing method and device, computer equipment and storage medium | |
CN111667001B (en) | Target re-identification method, device, computer equipment and storage medium | |
CN112784810A (en) | Gesture recognition method and device, computer equipment and storage medium | |
CN109903272B (en) | Target detection method, device, equipment, computer equipment and storage medium | |
CN111292377B (en) | Target detection method, device, computer equipment and storage medium | |
CN110956628B (en) | Picture grade classification method, device, computer equipment and storage medium | |
CN113706564A (en) | Meibomian gland segmentation network training method and device based on multiple supervision modes | |
CN112884782B (en) | Biological object segmentation method, apparatus, computer device, and storage medium | |
CN111401387A (en) | Abnormal sample construction method and device, computer equipment and storage medium | |
CN111445487A (en) | Image segmentation method and device, computer equipment and storage medium | |
CN114724218A (en) | Video detection method, device, equipment and medium | |
CN110135428B (en) | Image segmentation processing method and device | |
CN115909172A (en) | Depth-forged video detection, segmentation and identification system, terminal and storage medium | |
Kim et al. | Generalized facial manipulation detection with edge region feature extraction | |
CN111340025A (en) | Character recognition method, character recognition device, computer equipment and computer-readable storage medium | |
CN111803956B (en) | Method and device for determining game plug-in behavior, electronic equipment and storage medium | |
CN116612355A (en) | Training method and device for face fake recognition model, face recognition method and device | |
CN117037244A (en) | Face security detection method, device, computer equipment and storage medium | |
CN115249358A (en) | Method and system for quantitatively detecting carbon particles in macrophages and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
Address after: Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong 518000
Patentee after: Shenzhen Youjia Innovation Technology Co.,Ltd.
Address before: 518051 1101, west block, Skyworth semiconductor design building, 18 Gaoxin South 4th Road, Gaoxin community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.