CN113870195A - Training of target texture detection model, texture detection method and device - Google Patents

Training of target texture detection model, texture detection method and device

Info

Publication number
CN113870195A
CN113870195A (application CN202111055638.1A)
Authority
CN
China
Prior art keywords
target
texture
detection model
sample image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111055638.1A
Other languages
Chinese (zh)
Inventor
钟东宏
李云锴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111055638.1A priority Critical patent/CN113870195A/en
Publication of CN113870195A publication Critical patent/CN113870195A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract


Figure 202111055638

The present disclosure provides a method and device for training a target texture detection model and for texture detection, including: acquiring multiple texture materials and multiple sample images; adding the texture materials to the sample images to obtain target sample images containing the texture materials; and training an initial texture detection model according to the target sample images and the first positions of the texture materials within the target sample images, to obtain a target texture detection model used to detect the position of texture material included in an image to be detected. The present disclosure does not need to process each texture material separately; at application time, the positions of texture materials in the image to be detected are identified using only the trained target texture detection model. The whole process applies to detection scenarios with a large number of different textures, reducing the resource requirements and time consumption of texture detection, improving detection efficiency, and shortening response time.


Description

Target map detection model training and map detection method and device
Technical Field
The embodiment of the disclosure relates to the technical field of computers, and in particular relates to a training method and device of a target map detection model, a map detection method and device, electronic equipment, a computer storage medium and a computer program product.
Background
The sticker special effect is currently a popular special-effect type: a user can achieve a personalized display effect by adding a sticker to a picture or a video. Detecting stickers for the sticker special effect is likewise an important subject of current research; its purpose is to detect whether a sticker is present in an image, together with information such as the sticker's position and size.
At present, feature matching is often used to detect stickers in an image. Specifically, a feature of the sticker to be detected (a depth feature, a hand-defined feature, or the like) is obtained; a full-image sliding-window traversal or similar scheme is applied to the image to be detected to search for a region matching the sticker's feature; and that region is taken as the area where the sticker appears in the image, thereby determining information such as the sticker's position and size.
However, in this scheme each sticker to be detected must be matched separately; when there are many sticker types, this consumes a large amount of computing resources and takes a long time.
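To make this cost concrete, a minimal back-of-the-envelope sketch (the image size, window size, stride, and template count are illustrative assumptions, not figures from the disclosure) shows how the number of comparisons in full-image sliding-window matching grows linearly with the number of sticker templates:

```python
def sliding_window_comparisons(image_w, image_h, win_w, win_h, stride, num_stickers):
    """Count template comparisons for a full-image sliding-window traversal."""
    cols = (image_w - win_w) // stride + 1
    rows = (image_h - win_h) // stride + 1
    return cols * rows * num_stickers

# A 1920x1080 image, a 128x128 window, stride 16, and 1000 sticker templates:
print(sliding_window_comparisons(1920, 1080, 128, 128, 16, 1000))  # 6780000
```

Every additional template re-runs the full traversal, which is exactly the per-sticker cost a single trained detection model avoids.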
Disclosure of Invention
The embodiment of the disclosure provides a training method and device for a target map detection model, a map detection method and device, electronic equipment, a computer storage medium and a computer program product, so as to solve the problems that a large amount of computing resources are consumed and a long time is consumed in the related art.
In a first aspect, an embodiment of the present disclosure provides a method for training a target map detection model, where the method includes:
obtaining a plurality of mapping materials and a plurality of sample images;
adding the mapping material into the sample image to obtain a target sample image containing the mapping material;
training an initial mapping detection model according to the target sample image and a first position of a mapping material included in the target sample image to obtain a target mapping detection model, wherein the target mapping detection model is used for detecting the position of the mapping material included in the image to be detected.
In an alternative embodiment, the adding the mapping material to the sample image to obtain a target sample image containing mapping material includes:
and selecting a random number of mapping materials from all mapping materials for each sample image, adding the mapping materials into the sample image to obtain a plurality of target sample images containing the mapping materials, wherein the mapping materials have random sizes, and the mapping materials are added at random positions in the sample image.
In an optional implementation manner, the training an initial mapping detection model according to the target sample image and the position of the mapping material included in the target sample image to obtain a target mapping detection model includes:
and training the initial mapping detection model according to the target sample image, the first position of the mapping material included in the target sample image and the first size of the mapping material to obtain the target mapping detection model, wherein the target mapping detection model is used for detecting the position and the size of the mapping material included in the image to be detected.
In an optional implementation, the training the initial mapping detection model according to the target sample image, the first position of the mapping material included in the target sample image, and the first size of the mapping material, and obtaining the target mapping detection model includes:
training the initial mapping detection model through multiple rounds of iterative training operation according to the target sample image, the first position and the first size to obtain the target mapping detection model;
one of the iterative training operations comprises:
inputting the target sample image into the initial mapping detection model to obtain a second position and a second size output by the initial mapping detection model;
calculating a loss value based on the second position, the second size, the first position, and the first size;
and adjusting parameters of the initial mapping detection model according to the loss value and a preset loss function.
In an alternative embodiment, said selecting a random number of mapping materials from all mapping materials for each of said sample images comprises:
determining the random number of the mapping materials to be selected aiming at each sample image;
selecting the random number of mutually different mapping materials according to a first probability value;
or selecting the random number of the mapping materials according to a second probability value, and determining the number of the same mapping materials in the selected mapping materials according to a preset same mapping ratio; the sum of the first probability value and the second probability value is 1.
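The probability-gated selection above can be sketched as follows (the concrete probability value and same-mapping ratio are illustrative assumptions; the disclosure leaves them as presets):

```python
import random

def select_stickers(all_stickers, first_prob=0.7, same_ratio=0.5, rng=None):
    """Pick a random number (1 to 3) of sticker materials for one sample image.

    With probability `first_prob`, all picked stickers are mutually different;
    otherwise a preset ratio of the picks are duplicates of one sticker.
    The two branch probabilities sum to 1, as in the disclosure.
    """
    rng = rng or random.Random()
    n = rng.randint(1, 3)
    if rng.random() < first_prob:
        # first branch: mutually different materials
        return rng.sample(all_stickers, k=min(n, len(all_stickers)))
    # second branch: `same_ratio` of the picks repeat one material
    dup = max(1, round(n * same_ratio))
    base = rng.choice(all_stickers)
    return [base] * dup + rng.choices(all_stickers, k=n - dup)
```

Passing a seeded `random.Random` makes the sampling reproducible across training-data builds.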
In an alternative embodiment, the random number ranges from 1 to 3; the random size ranges from 10% of the sample image size to 50% of the sample image size.
In an alternative embodiment, the mapping material is added at random locations in the sample image, including: any corner point of the mapping material is added at a random position in the sample image, or the center point of the mapping material is added at a random position in the sample image.
In a second aspect, an embodiment of the present disclosure further provides a map detection method, where the method includes:
acquiring an image to be detected;
inputting the image to be detected into a target map detection model to obtain the position of a map material in the image to be detected output by the target map detection model;
the target map detection model is obtained by training the training method of the target map detection model.
In a third aspect, an embodiment of the present disclosure further provides a device for training a target map detection model, where the device includes:
the acquisition module is configured to acquire a plurality of map materials and a plurality of sample images;
the adding module is configured to add the mapping material into the sample image to obtain a target sample image containing the mapping material;
the training module is configured to train an initial mapping detection model according to the target sample image and a first position of a mapping material included in the target sample image to obtain a target mapping detection model, and the target mapping detection model is used for detecting the position of the mapping material included in the image to be detected.
In an alternative embodiment, the adding module includes:
the adding sub-module is configured to select a random number of mapping materials from all the mapping materials for each sample image, add the mapping materials into the sample image, and obtain a plurality of target sample images containing the mapping materials, wherein the mapping materials have random sizes, and the mapping materials are added at random positions in the sample image.
In an alternative embodiment, the training module comprises:
the training sub-module is configured to train the initial mapping detection model according to the target sample image, a first position of a mapping material included in the target sample image, and a first size of the mapping material, so as to obtain the target mapping detection model, and the target mapping detection model is used for detecting the position and the size of the mapping material included in the image to be detected.
In an alternative embodiment, the training submodule includes:
an iteration unit configured to train the initial mapping detection model through multiple rounds of iterative training operations according to the target sample image, the first position, and the first size, so as to obtain the target mapping detection model;
one of the iterative training operations comprises:
inputting the target sample image into the initial mapping detection model to obtain a second position and a second size output by the initial mapping detection model;
calculating a loss value based on the second position, the second size, the first position, and the first size;
and adjusting parameters of the initial mapping detection model according to the loss value and a preset loss function.
In an alternative embodiment, the adding sub-module includes:
a first determination unit configured to determine, for each of the sample images, a random number of map materials to be selected;
a second determination unit configured to select the random number of mutually different map materials with a first probability value;
the third determining unit is configured to select the random number of the mapping materials according to a second probability value, and determine the number of the same mapping materials in the selected mapping materials according to a preset same mapping ratio; the sum of the first probability value and the second probability value is 1.
In an alternative embodiment, the random number ranges from 1 to 3; the random size ranges from 10% of the sample image size to 50% of the sample image size.
In an alternative embodiment, the mapping material is added at random locations in the sample image, including: any corner point of the mapping material is added at a random position in the sample image, or the center point of the mapping material is added at a random position in the sample image.
In a fourth aspect, an embodiment of the present disclosure further provides a map detection apparatus, where the apparatus includes:
the image acquisition module is configured to acquire an image to be detected;
the detection module is configured to input the image to be detected into a target map detection model to obtain the position of a map material included in the image to be detected output by the target map detection model;
the target map detection model is obtained through the above training method of the target map detection model.
In a fifth aspect, embodiments of the present disclosure further provide an electronic device, including a processor and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the method described above.
In a sixth aspect, embodiments of the present disclosure further provide a computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method described above.
In a seventh aspect, the disclosed embodiments also provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the method described above is implemented.
In the disclosed embodiment, the method includes: obtaining a plurality of mapping materials and a plurality of sample images; adding the mapping material into the sample image to obtain a target sample image containing the mapping material; and training an initial mapping detection model according to the target sample image and the first position of the mapping material included in the target sample image, to obtain a target mapping detection model used to detect the position of mapping material included in an image to be detected. With this method, each mapping material does not need to be processed separately; at the application stage, the positions of mapping materials in the image to be detected are identified using only the trained target mapping detection model. The whole process is applicable to detection scenarios with a large number of different mappings, reduces the resource demand and time consumption of mapping detection, improves detection efficiency, and shortens response time.
The foregoing description is only an overview of the technical solutions of the present disclosure, and the embodiments of the present disclosure are described below in order to make the technical means of the present disclosure more clearly understood and to make the above and other objects, features, and advantages of the present disclosure more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the disclosure. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart illustrating steps of a method for training a target map detection model according to an embodiment of the present disclosure;
FIG. 2 is a mapping application scene graph provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating steps of a method for detecting a map according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating steps of another method for training a target map detection model according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a training apparatus for a target map detection model according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of a map detection apparatus provided in an embodiment of the present disclosure;
FIG. 7 is a logical block diagram of an electronic device of one embodiment of the present disclosure;
fig. 8 is a logic block diagram of an electronic device of another embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a flowchart illustrating steps of a training method for a target map detection model according to an embodiment of the present disclosure, and as shown in fig. 1, the method may include:
step 101, obtaining a plurality of mapping materials and a plurality of sample images.
In the embodiment of the present disclosure, the mapping material is a mapping special effect. Specifically, the mapping special effect may be a planar image with personalized content that can be laid over another image's surface, and the sample image may be an image whose surface is not yet covered with any mapping material, that is, a clean image.
For example, fig. 2 illustrates a mapping application scene provided by an embodiment of the present disclosure, in which a mapping material 10 is laid over the surface of a sample image 20 to add a mapping special effect to the sample image 20. When a user adds the special effect, both the position of the mapping material 10 in the sample image 20 and the size of the mapping material 10 may be adjusted arbitrarily according to actual requirements. The sample image 20 may be an independent image or a video frame in a video.
In the embodiment of the present disclosure, the mapping material and the sample image may be obtained directly from a local image database on the server, may be self-edited images, or may be downloaded from an internet gallery; the embodiment of the present disclosure does not limit this.
And 102, adding the mapping material into the sample image to obtain a target sample image containing the mapping material.
In the embodiment of the present disclosure, in order to train a target mapping detection model capable of identifying the positions of mapping materials in an image to be detected, training data must first be prepared: a large number of target sample images containing mapping materials, with the first position of the mapping material within each target sample image recorded as annotation information for training.
Specifically, the embodiment of the present disclosure may add the mapping material prepared in step 101 to the sample image, take the resulting target sample image containing the mapping material as training data, and record information such as the number, position, and size of the mapping material added to each sample image as annotation information. The mapping material may be added to the sample image with a random number, random positions, and random sizes, or with a set number, set positions, and set sizes; the embodiment of the present disclosure does not limit this. The terminal that adds the mapping material to the sample image can perform the adding operation automatically and record the added position automatically, so building the training data requires no manual participation, reducing labor cost.
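A minimal sketch of this automatic data-building step follows (the image dimensions, sticker names, square stickers, and the 10%-50% size range are illustrative assumptions; the disclosure only requires that placements and sizes be recorded as annotations):

```python
import random

def make_target_sample(sample_w, sample_h, stickers, rng=None):
    """Composite 1-3 randomly chosen stickers onto one sample image and
    return the recorded annotations (which sticker, position, size)."""
    rng = rng or random.Random()
    annotations = []
    for name in rng.sample(stickers, k=rng.randint(1, min(3, len(stickers)))):
        # random size: 10%-50% of the image's shorter side (assumed square sticker)
        side = int(min(sample_w, sample_h) * rng.uniform(0.10, 0.50))
        # random position: top-left corner anywhere that keeps the sticker inside
        x = rng.randint(0, sample_w - side)
        y = rng.randint(0, sample_h - side)
        annotations.append({"sticker": name, "bbox": (x, y, side, side)})
    return annotations
```

Each returned `bbox` plays the role of the "first position" (and first size) used as a training label.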
Step 103, training an initial mapping detection model according to the target sample image and a first position of a mapping material included in the target sample image to obtain a target mapping detection model, wherein the target mapping detection model is used for detecting the position of the mapping material included in the image to be detected.
For the training process, based on deep-learning technology, the target sample images containing mapping material, together with the first positions of that mapping material within the target sample images, are used as training data to train an initial mapping detection model. After multiple rounds of iterative training, the target mapping detection model is obtained; it takes the image to be detected as input and outputs the positions of mapping materials in that image, thereby identifying mapping-material positions through model-based detection. The initial map detection model may be a machine learning model, such as a deep learning model.
Specifically, during training, one iteration may input one or more target sample images containing mapping material into the initial map detection model, calculate a loss value from the model's output and the real first position of the mapping material, and adjust the model parameters of the initial map detection model according to the loss value and a preset loss function. After multiple rounds of iterative training, the output of the initial map detection model meets the training target, and the trained initial map detection model is taken as the target map detection model.
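One iteration's loss computation and parameter adjustment can be sketched as follows (the disclosure does not fix the loss function; a squared-error loss over the predicted box versus the annotated box, with a plain gradient step, is assumed here):

```python
def detection_loss(pred, target):
    """Mean squared error between the model output (second position/size)
    and the annotation (first position/size), each given as (x, y, w, h)."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def sgd_step(pred, target, lr=0.1):
    """One illustrative gradient-descent adjustment driven by the loss."""
    grads = [2 * (p - t) / len(pred) for p, t in zip(pred, target)]
    return [p - lr * g for p, g in zip(pred, grads)]

pred, true = [10.0, 10.0, 5.0, 5.0], [12.0, 8.0, 6.0, 4.0]
before = detection_loss(pred, true)
after = detection_loss(sgd_step(pred, true), true)  # smaller than `before`
```

In a real model the gradient flows through the network parameters rather than the box values themselves; this sketch only shows the loss decreasing across one iteration.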
In the embodiment of the disclosure, a target map detection model with the function of identifying the positions of map materials in an image to be detected can be trained, so that map materials in the image are detected in a model-based manner. Compared with detecting maps through feature matching as in the related art, the method of this embodiment does not need to process each map material separately: the model learns the positions of map materials under their many different characteristics during training, and the application stage only needs the trained target map detection model to identify and detect the positions of map materials in the image to be detected. The whole process is applicable to detection scenarios with a large number of different maps, reduces the resource demand and time consumption of map detection, improves detection efficiency, and shortens response time.
To sum up, the training method for the target map detection model provided by the embodiment of the present disclosure includes: obtaining a plurality of mapping materials and a plurality of sample images; adding the mapping material into the sample image to obtain a target sample image containing the mapping material; and training an initial mapping detection model according to the target sample image and the first position of the mapping material included in the target sample image, to obtain a target mapping detection model used to detect the position of mapping material included in an image to be detected. With this method, each mapping material does not need to be processed separately; at the application stage, the positions of mapping materials in the image to be detected are identified using only the trained target mapping detection model. The whole process is applicable to detection scenarios with a large number of different mappings, reduces the resource demand and time consumption of mapping detection, improves detection efficiency, and shortens response time.
Fig. 3 is a flowchart illustrating steps of a map detection method according to an embodiment of the present disclosure, where as shown in fig. 3, the method may include:
step 201, an image to be detected is obtained.
In the embodiment of the present disclosure, the image to be detected may be an image containing a mapping material, or may be a clean image not containing a mapping material.
Step 202, inputting the image to be detected into a target map detection model, and obtaining the position of a map material included in the image to be detected and output by the target map detection model.
The target map detection model is obtained by training through the training method shown in the embodiment of fig. 1.
In the embodiment of the disclosure, by training a target mapping detection model with the function of identifying the positions of mapping materials in an image to be detected, the positions of mapping materials can be detected in a model-based manner. Compared with detecting mappings through feature matching as in the related art, the method of this embodiment does not need to process each mapping material separately: the model learns the positions of mapping materials under their many different characteristics during training, and the application stage only needs the trained target mapping detection model to identify and detect the positions of mapping materials in the image to be detected. The whole process is applicable to detection scenarios with a large number of different mappings, reduces the resource demand and time consumption of mapping detection, improves detection efficiency, and shortens response time.
To sum up, the map detection method provided by the embodiment of the present disclosure includes: acquiring an image to be detected; and inputting the image to be detected into a target map detection model to obtain the position of the map material in the image to be detected as output by the target map detection model. With this method, each map material does not need to be processed separately; at the application stage, the positions of map materials in the image to be detected are identified using only the trained target map detection model. The whole process is applicable to detection scenarios with a large number of different maps, reduces the resource demand and time consumption of map detection, improves detection efficiency, and shortens response time.
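The detection step itself reduces to a single forward pass. As a sketch (the `fake_model` stand-in and the (x, y, w, h) box format are assumptions; any trained detector returning boxes would slot in):

```python
def detect_stickers(model, image):
    """Run the trained target map detection model on the image to be
    detected and return the detected (x, y, w, h) boxes."""
    return [tuple(box) for box in model(image)]

# A toy stand-in 'model' that always reports one box, for illustration only.
fake_model = lambda image: [[32, 48, 100, 100]]
print(detect_stickers(fake_model, object()))  # [(32, 48, 100, 100)]
```

The same call serves any number of different sticker types, with no per-sticker matching pass.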
Fig. 4 is a flowchart illustrating steps of another method for training a target map detection model according to an embodiment of the present disclosure, where as shown in fig. 4, the method may include:
step 301, obtaining a plurality of mapping materials and a plurality of sample images.
This step may specifically refer to step 101, which is not described herein again.
Step 302, for each sample image, selecting a random number of mapping materials from all mapping materials and adding them into the sample image, to obtain a plurality of target sample images containing the mapping materials, wherein the mapping materials have random sizes and are added at random positions in the sample image.
In an actual scene there are a large number of mapping materials, and different users have different habits when adding mappings, that is, different requirements for adding position, mapping size, mapping number, and the like. A large number of different combinations of mappings and sample images therefore exist in practice, making such combinations effectively random. The large number of target sample images obtained in the above manner matches the randomness of real mapping/sample-image combinations (a mapping may appear at any size, in any number, and at any position), so user habits are simulated more comprehensively, improving the quality of the subsequent training process.
Specifically, a range-limited random function generates a random number within its configured range. For the random count of texture materials: if the total number of texture materials is A, the range of the first random function can be set to 1 to A, so that the first random function yields a number in that range, and that many texture materials are selected.
For the random size: if the maximum size of the sample image is B and the minimum size is C, the range of the second random function can be set to C to B, so that the second random function yields a random size in that range, which is applied to the texture material.
For the random position: the range of the third random function is set according to the upper and lower bounds of the coordinate values in the sample image, so that the third random function yields a random position within that range, at which the texture material is placed.
Optionally, the random count ranges from 1 to 3, and the random size ranges from 10% to 50% of the sample image size. With these ranges, the generated combinations better match real scenes: uncommon texture-image combinations are reduced, the precision of the training data is improved, and redundant data is avoided.
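The three range-limited random functions described above can be sketched with Python's `random` module. The function and parameter names below are illustrative assumptions, not taken from the patent; sizes are drawn as a fraction of the sample-image size and positions are clamped so each texture stays inside the image:

```python
import random

def place_random_textures(image_size, textures, max_count=3,
                          min_frac=0.10, max_frac=0.50):
    """Sketch of the randomized augmentation described above.
    Picks a random number of textures, a random size for each
    (as a fraction of the sample-image size), and a random
    top-left position inside the image. Illustrative only."""
    w, h = image_size
    count = random.randint(1, max_count)           # first random function: 1..A
    placements = []
    for texture in random.sample(textures, count):
        frac = random.uniform(min_frac, max_frac)  # second: size in [C, B]
        tw, th = int(w * frac), int(h * frac)
        x = random.randint(0, max(0, w - tw))      # third: position within bounds
        y = random.randint(0, max(0, h - th))
        placements.append((texture, (x, y, tw, th)))
    return placements
```

Each returned tuple is a candidate annotation (material, first position, first size) for one target sample image.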
Optionally, adding the texture material at a random position in the sample image includes: adding any corner point of the texture material at a random position in the sample image, or adding the center point of the texture material at a random position in the sample image.
In other words, the position of the texture material may be anchored at a corner point (for example, the upper-left corner), at the center point, or at any other reference point; the present disclosure does not limit this.
Optionally, step 302 may specifically include:
Substep 3021: for each sample image, determine the random number of texture materials to be selected.
Substep 3022: when a first probability value is hit, select that number of mutually different texture materials.
Or, when a second probability value is hit, select that number of texture materials and determine, according to a preset same-texture ratio, how many of the selected materials are identical. The sum of the first probability value and the second probability value is 1.
In real scenarios, owing to differences in demand and user habits, some users are accustomed to adding multiple identical texture materials to an image; for example, a user may add three identical smiley-face textures to heighten the emotional expression. Other users prefer to add multiple different textures to increase personalization.
Therefore, to make the training data match the real situation more closely, the embodiment of the present disclosure may decide, when constructing each target sample image, whether the added texture materials include duplicates according to a set first probability value: when the first probability value is hit, all added texture materials are different; when the second probability value (1 minus the first probability value) is hit, some of the added texture materials are identical, and their number is determined by the preset same-texture ratio. This further increases the authenticity and quality of the training data. Note that the first probability value and the same-texture ratio may be set according to actual needs; the embodiment of the present disclosure does not limit them.
For example, assume the first probability value is 70%, the same-texture ratio (identical count / total count) is 1/3, and hence the second probability value is 30%. For each sample image a probability draw is performed: if the 70% branch is hit, a random number of mutually different texture materials is selected and added to the sample image; if the 30% branch is hit and the random number is, say, 9, then 3 of the 9 texture materials are identical, and all 9 are added to the sample image.
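The probability-hit selection above can be sketched as follows; the function and parameter names (`p_distinct`, `same_ratio`) are illustrative assumptions rather than terms from the patent:

```python
import random

def select_texture_materials(materials, count, p_distinct=0.7, same_ratio=1/3):
    """Sketch of the probability-hit selection described above.
    With probability p_distinct (the first probability value), `count`
    mutually different materials are chosen; otherwise a fraction
    `same_ratio` of the slots is filled with one repeated material."""
    if random.random() < p_distinct:
        return random.sample(materials, count)     # all different
    n_same = max(1, round(count * same_ratio))     # e.g. 3 of 9 identical
    repeated = random.choice(materials)
    others = random.sample([m for m in materials if m != repeated],
                           count - n_same)
    picks = [repeated] * n_same + others
    random.shuffle(picks)                          # mix duplicates among the rest
    return picks
```

Setting `p_distinct=0.7` and `same_ratio=1/3` reproduces the 70%/30% example in the text.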
Step 303: train the initial texture detection model according to the target sample image, the first position of the texture material included in the target sample image, and the first size of the texture material, to obtain the target texture detection model, where the target texture detection model is used to detect the position and size of texture material included in an image to be detected.
The training data consists of a plurality of target sample images, each with corresponding annotation information: the number of texture materials added to the image, their first positions, and their first sizes. During training, a target sample image is input into the initial texture detection model, a loss value is calculated from the model's output and the image's annotations using a preset loss function, and the model parameters are adjusted according to the loss value, yielding the target texture detection model. The trained model can then detect the positions and sizes of texture materials in images to be detected.
Optionally, step 303 may specifically include:
Substep 3031: train the initial texture detection model through multiple rounds of iterative training operations according to the target sample image, the first position, and the first size, to obtain the target texture detection model.
In the embodiment of the present disclosure, the target texture detection model is obtained through multiple rounds of iterative training. Each iteration optimizes the model parameters of the initial texture detection model, and multiple rounds are needed for the parameters to reach the desired quality. Once a preset number of iterations has been performed, the parameters are considered to have converged, iteration terminates, and the target texture detection model is obtained for use.
The preset number of iterations can be set according to actual requirements. In one approach, after the preset number is reached, the similarity between the output of the initial texture detection model and a reference value is computed; if the similarity exceeds a set threshold (for example, 90%), the training target is considered met and the target texture detection model is obtained. If the similarity is below the threshold, additional iterations are run until it meets the requirement. In another approach, after the preset number is reached, training is judged complete by checking whether the loss value of the model falls within a preset range.
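The stopping rule just described (a preset round count, then extra rounds until the evaluated loss is acceptable) can be sketched generically; `step_fn`, `eval_fn`, and all numeric defaults are illustrative assumptions:

```python
def train_until_converged(step_fn, eval_fn, preset_rounds=100,
                          loss_threshold=0.05, extra_rounds=10, max_extra=5):
    """Sketch of the stopping rule described above: run a preset
    number of iterations, then keep adding batches of extra
    iterations until the evaluated loss falls within the
    acceptable range (or a safety cap is hit)."""
    for _ in range(preset_rounds):        # the preset number of iterations
        step_fn()
    tries = 0
    while eval_fn() > loss_threshold and tries < max_extra:
        for _ in range(extra_rounds):     # additional iterative training
            step_fn()
        tries += 1
    return eval_fn()
```

The same skeleton works for the similarity-threshold variant by replacing `eval_fn` with a similarity check against the reference value.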
One iterative training operation includes the following steps:
Step A1: input the target sample image into the initial texture detection model to obtain the second position and second size output by the model.
Step A2: calculate a loss value based on the second position, the second size, the first position, and the first size.
Step A3: adjust the parameters of the initial texture detection model according to the loss value and the preset loss function.
In the embodiment of the present disclosure, the loss value is calculated from the difference between the first position and the second position output by the initial texture detection model, and the difference between the first size and the second size. A loss function maps the value of a random variable to a non-negative real number representing the "risk" or "loss" of the random event. In practice the loss function serves as the learning criterion of the optimization problem: the model is solved and evaluated by minimizing it. A suitable loss function can be chosen according to actual requirements; the embodiment of the present disclosure does not restrict the specific choice.
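As one concrete possibility (the patent deliberately leaves the choice of loss function open), the position-and-size difference of steps A1 to A3 could be penalized with an L1 loss; the function name and the L1 choice are assumptions for illustration:

```python
def box_regression_loss(pred_boxes, true_boxes):
    """Illustrative L1 loss over predicted (second) position/size
    against the labelled (first) position/size. Each box is an
    (x, y, w, h) tuple; the result is non-negative and zero only
    when prediction and label agree exactly."""
    total = 0.0
    for (px, py, pw, ph), (tx, ty, tw, th) in zip(pred_boxes, true_boxes):
        total += abs(px - tx) + abs(py - ty)   # position difference
        total += abs(pw - tw) + abs(ph - th)   # size difference
    return total / max(1, len(true_boxes))     # mean over annotated textures
```

Smooth-L1 or IoU-based losses, common in detection models, would slot into the same place.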
The training process can be optimized with stochastic gradient descent: in each of many iterations, a group of data (a target sample image and its annotations) is drawn at random from the training data, the parameters are updated according to the gradient after training on it, and another group is then drawn and the parameters updated again. When the amount of training data is extremely large, a model whose loss is within an acceptable range can be obtained without training on every sample. Here "stochastic" means that the input data are randomly shuffled in every iteration, which effectively reduces the parameter-update drift caused by a fixed input order. Over successive iterations the loss between the model output and the ground truth decreases, and training of the target texture detection model is considered complete once the loss falls within the preset range.
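The per-epoch reshuffling that "stochastic" refers to can be sketched as follows; `update_fn` stands in for one gradient step and, like the function name, is an assumption rather than the patent's implementation:

```python
import random

def sgd_epochs(data, update_fn, epochs=3):
    """Sketch of shuffled stochastic-gradient training: each epoch
    re-shuffles the (image, label) pairs so parameter updates do
    not drift with a fixed input order."""
    order_log = []
    for _ in range(epochs):
        batch = list(data)
        random.shuffle(batch)               # reshuffle every iteration
        order_log.append([d[0] for d in batch])
        for sample, label in batch:
            update_fn(sample, label)        # one gradient step per sample
    return order_log
```

Every epoch visits all samples exactly once, but in a fresh random order each time.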
To sum up, the training method for a target texture detection model provided by the embodiments of the present disclosure includes: obtaining a plurality of texture materials and a plurality of sample images; adding the texture materials to the sample images to obtain target sample images containing texture materials; and training an initial texture detection model according to the target sample image and the first position of the texture material it contains, to obtain a target texture detection model used to detect the position of texture material in an image to be detected. With this method, each texture material no longer needs to be processed individually: at application time, the positions of texture materials in the image to be detected are recognized by a single trained target texture detection model. The whole process therefore scales to detection scenarios involving a large number of different textures, reduces the resource demand and time consumption of texture detection, improves detection efficiency, and shortens response time.
Fig. 5 is a block diagram of a training apparatus for a target texture detection model according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus includes: an acquisition module 401, an adding module 402, and a training module 403.
The acquisition module 401 is configured to acquire a plurality of texture materials and a plurality of sample images.
The adding module 402 is configured to add the texture material to the sample image to obtain a target sample image containing the texture material.
The training module 403 is configured to train an initial texture detection model according to the target sample image and the first position of the texture material included in the target sample image, to obtain a target texture detection model used to detect the position of texture material included in an image to be detected.
In an optional implementation, the adding module includes:
an adding sub-module configured to, for each sample image, select a random number of texture materials from all texture materials and add them to the sample image, obtaining a plurality of target sample images containing texture materials, where each texture material has a random size and is added at a random position in the sample image.
In an optional implementation, the training module includes:
a training sub-module configured to train the initial texture detection model according to the target sample image, the first position of the texture material included in the target sample image, and the first size of the texture material, to obtain the target texture detection model, where the target texture detection model is used to detect the position and size of texture material included in the image to be detected.
In an optional implementation, the training sub-module includes:
an iteration unit configured to train the initial texture detection model through multiple rounds of iterative training operations according to the target sample image, the first position, and the first size, to obtain the target texture detection model;
where one iterative training operation includes:
inputting the target sample image into the initial texture detection model to obtain a second position and a second size output by the initial texture detection model;
calculating a loss value based on the second position, the second size, the first position, and the first size; and
adjusting the parameters of the initial texture detection model according to the loss value and the preset loss function.
In an optional implementation, the adding sub-module includes:
a first determination unit configured to determine, for each sample image, the random number of texture materials to be selected;
a second determination unit configured to select that number of mutually different texture materials when a first probability value is hit; and
a third determination unit configured to select that number of texture materials when a second probability value is hit, and to determine the number of identical materials among them according to a preset same-texture ratio; the sum of the first probability value and the second probability value is 1.
In an alternative implementation, the random number ranges from 1 to 3; the random size ranges from 10% of the sample image size to 50% of the sample image size.
In an optional implementation, any corner point of the texture material is added at a random position in the sample image, or the center point of the texture material is added at a random position in the sample image.
To sum up, with the training apparatus for a target texture detection model provided by the embodiments of the present disclosure, a plurality of texture materials and sample images are obtained; the texture materials are added to the sample images to obtain target sample images containing texture materials; and an initial texture detection model is trained according to the target sample image and the first position of the texture material it contains, yielding a target texture detection model used to detect the position of texture material in an image to be detected. In this way, each texture material no longer needs to be processed individually, the whole process scales to detection scenarios involving a large number of different textures, the resource demand and time consumption of texture detection are reduced, detection efficiency is improved, and response time is shortened.
Fig. 6 is a block diagram of a texture detection apparatus according to an embodiment of the present disclosure. As shown in fig. 6, the apparatus includes: an image acquisition module 501 and a detection module 502.
The image acquisition module 501 is configured to acquire an image to be detected.
The detection module 502 is configured to input the image to be detected into a target texture detection model, to obtain the position, output by the target texture detection model, of the texture material included in the image to be detected;
where the target texture detection model is trained by the apparatus of the embodiment of fig. 5.
To sum up, with the texture detection apparatus provided by the embodiments of the present disclosure, an image to be detected is acquired and input into a target texture detection model, which outputs the position of the texture material included in the image. Each texture material no longer needs to be processed individually, the whole process scales to detection scenarios involving a large number of different textures, the resource demand and time consumption of texture detection are reduced, detection efficiency is improved, and response time is shortened.
Fig. 7 is a block diagram illustrating an electronic device 600 according to an example embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is used to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, multimedia, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of electronic device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 600 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 610 is used to output and/or input audio signals. For example, the audio component 610 may include a Microphone (MIC) for receiving external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the electronic device 600. For example, the sensor component 614 may detect an open/closed state of the electronic device 600 and the relative positioning of components such as the display and keypad of the electronic device 600. The sensor component 614 may also detect a change in the position of the electronic device 600 or of one of its components, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and a change in its temperature. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is operable to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the method for training a target texture detection model provided by an embodiment of the present disclosure.
In an exemplary embodiment, a non-transitory computer storage medium including instructions, such as the memory 604 including instructions, executable by the processor 620 of the electronic device 600 to perform the above-described method is also provided. For example, the non-transitory storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 8 is a block diagram illustrating an electronic device 700 according to an example embodiment. For example, the electronic device 700 may be provided as a server. Referring to fig. 8, the electronic device 700 includes a processing component 722, which further includes one or more processors, and memory resources, represented by a memory 732, for storing instructions, such as application programs, executable by the processing component 722. The application programs stored in the memory 732 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 722 is configured to execute the instructions to perform the method for training a target texture detection model provided by an embodiment of the present disclosure.
The electronic device 700 may also include a power component 726 configured to perform power management of the electronic device 700, a wired or wireless network interface 750 configured to connect the electronic device 700 to a network, and an input/output (I/O) interface 758. The electronic device 700 may operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Embodiments of the present disclosure also provide a computer program product, which includes a computer program that, when executed by a processor, implements the method for training the target texture detection model described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1.一种目标贴图检测模型的训练方法,其特征在于,所述方法包括:1. A training method for a target texture detection model, wherein the method comprises: 获取多个贴图素材以及多个样本图像;Get multiple texture materials and multiple sample images; 将所述贴图素材添加至所述样本图像中,得到包含贴图素材的目标样本图像;adding the texture material to the sample image to obtain a target sample image containing the texture material; 根据所述目标样本图像,以及所述目标样本图像包括的贴图素材在所述目标样本图像中的第一位置,训练初始贴图检测模型,得到目标贴图检测模型,所述目标贴图检测模型用于检测待检测图像中包括的贴图素材的位置。According to the target sample image and the first position of the texture material included in the target sample image in the target sample image, an initial texture detection model is trained to obtain a target texture detection model, and the target texture detection model is used to detect The location of the texture material included in the image to be detected. 2.根据权利要求1所述的方法,其特征在于,所述将所述贴图素材添加至所述样本图像中,得到包含贴图素材的目标样本图像,包括:2 . The method according to claim 1 , wherein the adding the texture material to the sample image to obtain a target sample image containing the texture material comprises: 2 . 针对每个所述样本图像,从所有贴图素材中选取随机数量的贴图素材添加至所述样本图像中,得到多个包含贴图素材的目标样本图像,所述贴图素材具有随机尺寸,所述贴图素材被添加在所述样本图像中的随机位置。For each sample image, a random number of texture materials are selected from all texture materials and added to the sample image to obtain a plurality of target sample images including texture materials, the texture materials have random sizes, and the texture materials are added at random locations in the sample image. 3.根据权利要求2所述的方法,其特征在于,所述根据所述目标样本图像,以及所述目标样本图像包括的贴图素材的位置,训练初始贴图检测模型,得到目标贴图检测模型,包括:3 . The method according to claim 2 , wherein the initial texture detection model is trained according to the target sample image and the position of the texture material included in the target sample image, and the target texture detection model is obtained, comprising: 4 . 
training the initial texture detection model according to the target sample image, the first position of the texture material included in the target sample image within the target sample image, and the first size of the texture material, to obtain the target texture detection model, wherein the target texture detection model is used to detect the position and size of the texture material included in an image to be detected.

4. The method according to claim 3, wherein the training of the initial texture detection model according to the target sample image, the first position of the texture material included in the target sample image within the target sample image, and the first size of the texture material, to obtain the target texture detection model comprises:

training the initial texture detection model through multiple rounds of an iterative training operation according to the target sample image, the first position, and the first size, to obtain the target texture detection model;

wherein one round of the iterative training operation comprises:

inputting the target sample image into the initial texture detection model to obtain a second position and a second size output by the initial texture detection model;

calculating a loss value according to the second position, the second size, the first position, and the first size; and

adjusting the parameters of the initial texture detection model according to the loss value.

5. A texture detection method, characterized in that the method comprises:

acquiring an image to be detected; and

inputting the image to be detected into a target texture detection model to obtain the position of the texture material included in the image to be detected, as output by the target texture detection model;

wherein the target texture detection model is trained by the method according to any one of claims 1 to 4.

6. A training apparatus for a target texture detection model, characterized in that the apparatus comprises:

an acquisition module configured to acquire a plurality of texture materials and a plurality of sample images;

an adding module configured to add the texture materials to the sample images to obtain target sample images containing the texture materials; and

a training module configured to train an initial texture detection model according to the target sample image and the first position of the texture material included in the target sample image within the target sample image, to obtain a target texture detection model, wherein the target texture detection model is used to detect the position of the texture material included in an image to be detected.

7. A texture detection apparatus, characterized in that the apparatus comprises:

an image acquisition module configured to acquire an image to be detected; and

a detection module configured to input the image to be detected into a target texture detection model to obtain the position of the texture material included in the image to be detected, as output by the target texture detection model;

wherein the target texture detection model is trained by the apparatus according to claim 6.

8. An electronic device, characterized by comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the method according to any one of claims 1 to 5.

9. A computer-readable storage medium, characterized in that, when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method according to any one of claims 1 to 5.

10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 5.
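The claims above describe a two-stage pipeline: synthesize labelled training data by pasting texture material into a sample image and recording its first position and first size, then run a multi-round iterative training operation (forward pass, loss value from the first/second position and size, parameter adjustment). A minimal plain-Python sketch of that flow follows; the bias-only "model" and the MSE loss are hypothetical stand-ins, since the patent specifies neither the detector architecture nor the loss function.

```python
def add_texture(sample, texture, x, y):
    """Paste a texture patch into a sample image (both 2D lists of pixels)
    and return the composited target sample image plus the training label:
    the first position (x, y) and the first size (w, h)."""
    out = [row[:] for row in sample]
    h, w = len(texture), len(texture[0])
    for dy in range(h):
        for dx in range(w):
            out[y + dy][x + dx] = texture[dy][dx]
    return out, {"position": (x, y), "size": (w, h)}


def train_detector(labels, lr=0.1, rounds=200):
    """Multi-round iterative training operation on a toy bias-only model:
    its four parameters are the predicted second position (x, y) and second
    size (w, h). Each round performs a forward pass, computes an MSE loss
    against the first position/size, and adjusts the parameters."""
    params = [0.0, 0.0, 0.0, 0.0]
    for _ in range(rounds):
        for label in labels:
            truth = [*label["position"], *label["size"]]
            pred = params                                       # forward pass
            grads = [2 * (p - t) for p, t in zip(pred, truth)]  # dMSE/dparam
            params = [p - lr * g for p, g in zip(params, grads)]  # adjust
    return params


# Synthesize one target sample image, then train on its recorded label.
sample = [[0] * 8 for _ in range(8)]
texture = [[1, 1], [1, 1]]
image, label = add_texture(sample, texture, 3, 4)
params = train_detector([label])
```

With enough rounds the toy parameters converge to the annotated position and size, mirroring how the loss between the model's second position/size and the ground-truth first position/size drives the parameter updates in claim 4.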
CN202111055638.1A 2021-09-09 2021-09-09 Training of target texture detection model, texture detection method and device Pending CN113870195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111055638.1A CN113870195A (en) 2021-09-09 2021-09-09 Training of target texture detection model, texture detection method and device

Publications (1)

Publication Number Publication Date
CN113870195A true CN113870195A (en) 2021-12-31

Family

ID=78995067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111055638.1A Pending CN113870195A (en) 2021-09-09 2021-09-09 Training of target texture detection model, texture detection method and device

Country Status (1)

Country Link
CN (1) CN113870195A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155374A (en) * 2022-02-09 2022-03-08 深圳爱莫科技有限公司 Ice cream image training method, detection method and processing equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108236784A (en) * 2018-01-22 2018-07-03 腾讯科技(深圳)有限公司 The training method and device of model, storage medium, electronic device
WO2019029406A1 (en) * 2017-08-10 2019-02-14 腾讯科技(深圳)有限公司 Emoji displaying method and apparatus, computer readable storage medium, and terminal
CN110084304A (en) * 2019-04-28 2019-08-02 北京理工大学 A kind of object detection method based on generated data collection
CN112070137A (en) * 2020-08-27 2020-12-11 腾讯科技(深圳)有限公司 Training data set generation method, target object detection method and related equipment
CN112528977A (en) * 2021-02-10 2021-03-19 北京优幕科技有限责任公司 Target detection method, target detection device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108256555B (en) Image content identification method and device and terminal
EP3179408B1 (en) Picture processing method and apparatus, computer program and recording medium
CN106557768B (en) Method and device for recognizing characters in picture
US10534972B2 (en) Image processing method, device and medium
CN105845124B (en) Audio processing method and device
US20170154206A1 (en) Image processing method and apparatus
CN105631803B (en) The method and apparatus of filter processing
US11281363B2 (en) Method and device for setting identity image
CN109360197B (en) Image processing method and device, electronic equipment and storage medium
US10248855B2 (en) Method and apparatus for identifying gesture
CN112069951B (en) Video segment extraction method, video segment extraction device and storage medium
CN110764627B (en) Input method and device and electronic equipment
CN108921178B (en) Method and device for obtaining image blur degree classification and electronic equipment
CN112148980A (en) Item recommendation method, device, equipment and storage medium based on user click
CN110717399A (en) Face recognition method and electronic terminal equipment
CN111797262A (en) Poetry generation method and device, electronic equipment and storage medium
CN108460138A (en) Music recommends method, apparatus, equipment and storage medium
CN105163188A (en) Video content processing method, device and apparatus
CN111274444A (en) Method and device for generating video cover determination model and method and device for determining video cover
CN113870195A (en) Training of target texture detection model, texture detection method and device
CN106447747B (en) Image processing method and device
CN108027821B (en) Method and device for processing picture
CN111222041A (en) Shooting resource data acquisition method and device, electronic equipment and storage medium
CN107239490B (en) Method and device for naming face image and computer readable storage medium
CN113673603B (en) Element point matching method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination