WO2021244138A1 - Dial generation method and apparatus, electronic device, and computer-readable storage medium


Publication number: WO2021244138A1
Authority: WIPO (PCT)
Prior art keywords: picture, matched, target, feature, dial
Application number: PCT/CN2021/086409
Other languages: English (en), French (fr)
Inventor: 陈德银 (Chen Deyin)
Original Assignee: Oppo广东移动通信有限公司 (Guangdong Oppo Mobile Telecommunications Corp., Ltd.)
Application filed by Oppo广东移动通信有限公司

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text

Definitions

  • This application relates to the field of computer technology, and in particular to a method, device, electronic device and computer-readable storage medium for generating a dial.
  • the user can select a favorite pattern as the dial.
  • the traditional dial generation method has the problem that the generated dial is inaccurate.
  • a method, device, electronic device, and computer-readable storage medium for generating a dial are provided.
  • a method for generating a dial includes:
  • Extracting features to be matched of the picture to be matched
  • the target picture is sent to a wearable device; the target picture is used to instruct the wearable device to acquire a time element, and a dial is generated based on the time element and the target picture.
  • An electronic device includes a memory and a processor, and a computer program is stored in the memory; when executed by the processor, the computer program causes the processor to perform the operations of the dial generation method described above.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the operation of the above-mentioned method is realized.
  • the above-mentioned dial generation method, device, electronic equipment, and computer-readable storage medium match the feature to be matched of the picture to be matched with the reference feature of each reference picture, so that, based on the similarity between the picture to be matched and each reference picture, a more accurate target picture is determined from the reference pictures; the time element is then obtained, and based on the time element and the target picture, a more accurate dial can be generated.
  • a method for generating a dial includes:
  • Extracting features to be matched of the picture to be matched
  • the target picture is sent to a wearable device; the target picture is used to instruct the wearable device to acquire a time element, and a dial is generated based on the time element and the target picture.
  • a dial generating device includes:
  • the picture acquisition module is used to acquire the picture to be matched
  • the feature extraction module is used to extract the features to be matched of the image to be matched;
  • a reference picture and reference feature acquisition module configured to acquire a reference picture and a reference feature of the reference picture
  • a matching module configured to respectively match the feature to be matched with the reference feature of each of the reference pictures, and determine the similarity between the picture to be matched and each of the reference pictures;
  • a target picture determining module configured to determine a target picture from each of the reference pictures based on the similarity between the picture to be matched and each of the reference pictures;
  • the dial generating module is configured to send the target picture to a wearable device; the target picture is used to instruct the wearable device to acquire a time element, and to generate a dial based on the time element and the target picture.
  • An electronic device includes a memory and a processor, and a computer program is stored in the memory; when executed by the processor, the computer program causes the processor to perform the operations of the dial generation method described above.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the operation of the above-mentioned method is realized.
  • the above-mentioned dial generation method, device, electronic equipment, and computer-readable storage medium match the feature to be matched of the picture to be matched with the reference feature of each reference picture, so that, based on the similarity between the picture to be matched and each reference picture, a more accurate target picture is determined from the reference pictures, and the target picture is sent to the wearable device for the wearable device to generate a more accurate dial.
  • a method for generating a dial, applied to a wearable device, includes:
  • the target picture is determined by the electronic device from each reference picture based on the similarity between the acquired picture to be matched and each acquired reference picture; the similarity between the picture to be matched and each of the reference pictures is obtained by the electronic device matching the feature to be matched of the picture to be matched with the reference feature of each reference picture; and the feature to be matched of the picture to be matched is extracted by the electronic device from the picture to be matched;
  • a dial generating device applied to a wearable device including:
  • the target picture acquisition module is used to acquire the target picture sent by the electronic device; the target picture is determined by the electronic device from each reference picture based on the similarity between the acquired picture to be matched and each acquired reference picture; the similarity between the picture to be matched and each of the reference pictures is obtained by the electronic device matching the feature to be matched of the picture to be matched with the reference feature of each reference picture; and the feature to be matched of the picture to be matched is extracted by the electronic device from the picture to be matched;
  • the dial generating module is used to obtain the time element, and generate the dial based on the time element and the target picture.
  • a wearable device includes a memory and a processor, and a computer program is stored in the memory; when executed by the processor, the computer program causes the processor to perform the operations of the above-mentioned dial generation method.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the operation of the above-mentioned method is realized.
  • the above-mentioned dial generation method, device, wearable device, and computer-readable storage medium are used to obtain a target picture sent by an electronic device.
  • the target picture is determined by the electronic device from each reference picture based on the similarity between the picture to be matched and each reference picture; therefore, the determined target picture is more accurate. The time element is then obtained, and a more accurate dial is generated based on the time element and the target picture.
  • Fig. 1 is an application environment diagram of a dial generation method in an embodiment.
  • Fig. 2 is a flowchart of a method for generating a dial in an embodiment.
  • Fig. 3 is a flowchart of an operation to determine a target picture in an embodiment.
  • Fig. 4 is a flowchart of the operation to determine the region to be matched in an embodiment.
  • Fig. 5 is a flowchart of a method for generating a dial in another embodiment.
  • Fig. 6 is a structural block diagram of a dial generating device in an embodiment.
  • Fig. 7 is a structural block diagram of a dial generating device in another embodiment.
  • Fig. 8 is a structural block diagram of a dial generating device in another embodiment.
  • Fig. 9 is a schematic diagram of the internal structure of an electronic device in an embodiment.
  • Fig. 1 is a schematic diagram of an application environment of a dial generation method in an embodiment.
  • the application environment includes a wearable device 102 and an electronic device 104, and the wearable device 102 and the electronic device 104 communicate through a network.
  • the electronic device 104 obtains the picture to be matched; extracts the feature to be matched of the picture to be matched; obtains the reference pictures and the reference features of the reference pictures; matches the feature to be matched with the reference feature of each reference picture, and determines the similarity between the picture to be matched and each reference picture;
  • based on the similarity between the picture to be matched and each reference picture, a target picture is determined from the reference pictures; the target picture is sent to the wearable device 102 via the network.
  • the wearable device 102 obtains the time element, and generates a dial based on the time element and the target picture.
  • Fig. 2 is a flowchart of a method for generating a dial in an embodiment. As shown in FIG. 2, the dial generation method includes operations 202 to 212.
  • the picture to be matched refers to the picture used for matching to generate the dial.
  • the picture to be matched can be one of RGB (Red, Green, Blue) pictures, grayscale pictures, and so on. RGB pictures can be taken with a color camera. Grayscale pictures can be taken by black and white cameras.
  • the picture to be matched may be stored locally by the electronic device, or stored by other devices, or stored on the network, or be taken by the electronic device in real time, but is not limited to this.
  • an ISP (Image Signal Processing) processor or a central processing unit of the electronic device may obtain the picture to be matched from local storage or another device, or obtain the picture to be matched by shooting with a camera.
  • the feature to be matched of the picture to be matched is extracted.
  • the feature to be matched refers to the feature of the picture to be matched.
  • the feature to be matched may include at least one of a local feature and a global feature of the picture to be matched. Local features such as the texture feature and contour feature of the image to be matched; global features such as the color feature and contrast feature of the image to be matched.
  • the feature to be matched of the picture to be matched can be represented by a vector.
  • the electronic device inputs the picture to be matched into the feature extraction model, and extracts the features to be matched of the picture to be matched through the trained feature extraction model.
  • deep learning and metric learning are used to train the feature extraction model.
  • This deep learning uses Convolutional Neural Networks (CNN) for learning.
  • Metric Learning is a space-mapping method that can learn a feature (embedding) space in which all data are converted into feature vectors; the distance between the feature vectors of similar samples is small, and the distance between the feature vectors of dissimilar samples is large, so that the data can be distinguished.
  • the convolutional neural network in the feature extraction model is composed of multiple convolutional layers.
  • the shallow convolutional layers can extract local detail features such as texture and contour from the picture to be matched, and the high-level convolutional layers can extract global abstract features such as color and contrast; finally, the entire convolutional neural network embeds the picture to be matched into a high-dimensional vector (usually 128-dimensional, 256-dimensional, 512-dimensional, etc.) and outputs the high-dimensional vector.
  • the high-dimensional vector is the feature to be matched of the picture to be matched.
  • the electronic device can also perform processing such as denoising and de-wrinkling the image to be matched, and then perform feature extraction on the processed image to be matched, so as to extract more accurate features to be matched.
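The "picture in, normalised high-dimensional vector out" interface described above can be sketched as follows. This is a toy illustration, not the patent's model: a fixed random projection stands in for the trained convolutional network, and the function name `extract_embedding` is hypothetical; only the shape of the interface matches the description.

```python
import numpy as np

def extract_embedding(image: np.ndarray, dim: int = 128, seed: int = 0) -> np.ndarray:
    # Toy stand-in for a trained CNN: a fixed random projection maps the
    # flattened picture to a dim-dimensional vector, which is then
    # L2-normalised so that cosine similarity can be used downstream.
    rng = np.random.default_rng(seed)
    x = image.astype(np.float32).ravel()
    proj = rng.standard_normal((dim, x.size)).astype(np.float32)
    v = proj @ x
    return v / (np.linalg.norm(v) + 1e-12)

picture = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3))
feature = extract_embedding(picture)
print(feature.shape)  # (128,)
```

A real extractor would replace the random projection with learned convolutional layers, but the output contract (a fixed-length unit vector per picture) stays the same.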
  • the reference picture refers to the picture that is matched with the picture to be matched.
  • the reference feature refers to the feature of the reference picture.
  • the reference feature may also include at least one of a local feature and a global feature of the reference picture. Local features such as texture features and contour features of the reference picture; global features such as color features and contrast features of the reference picture.
  • the reference feature of the reference picture can be represented by a vector.
  • the electronic device may extract the reference feature from the reference picture in advance. In another embodiment, the electronic device may also extract the reference feature from the reference picture after acquiring the reference picture.
  • the electronic device inputs the reference picture into the feature extraction model, and the reference feature of the reference picture is extracted through the feature extraction model completed through training.
  • deep learning and metric learning are used to train the feature extraction model.
  • This deep learning uses Convolutional Neural Networks (CNN) for learning.
  • Metric Learning is a space-mapping method that can learn a feature (embedding) space in which all data are converted into feature vectors; the distance between the feature vectors of similar samples is small, and the distance between the feature vectors of dissimilar samples is large, so that the data can be distinguished.
  • the convolutional neural network in the feature extraction model is composed of multiple convolutional layers.
  • the shallow convolutional layers can extract local detail features such as texture and contour from the reference picture, and the high-level convolutional layers can extract global abstract features such as color and contrast; finally, the entire convolutional neural network embeds the reference picture into a high-dimensional vector (usually 128-dimensional, 256-dimensional, 512-dimensional, etc.) and outputs the high-dimensional vector.
  • the high-dimensional vector is the reference feature of the reference picture.
  • the electronic device can also perform processing such as denoising and de-wrinkling on the reference picture, and then perform feature extraction on the processed reference picture, so as to extract more accurate reference features.
  • the feature to be matched is matched with the reference feature of each reference picture respectively, and the similarity between the picture to be matched and each reference picture is determined.
  • the electronic device calculates the cosine distance between the feature to be matched and the reference feature, and uses the cosine distance as the similarity between the picture to be matched and the reference picture.
  • cosine distance, also called cosine similarity, uses the cosine value of the angle between two vectors in the vector space as a measure of the difference between two individuals.
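The cosine measure described above can be computed directly from the two feature vectors; a minimal sketch:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    # Cosine of the angle between two feature vectors:
    # 1.0 means identical direction, 0.0 means orthogonal.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```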
  • a target picture is determined from each reference picture based on the similarity between the picture to be matched and each reference picture.
  • the number of determined target pictures may be one or at least two.
  • the electronic device may determine the reference picture with the highest similarity as the target picture. In another implementation, the electronic device may determine the two reference pictures with the highest similarity as target pictures. In other implementations, the electronic device may determine reference pictures with other similarities as the target picture.
  • the weight factor of each reference picture is obtained, and the target picture is determined from each reference picture based on the similarity between the picture to be matched and each reference picture, and the weight factor of each reference picture.
  • the similarity between the picture to be matched and the reference picture A is 60%
  • the similarity between the picture to be matched and the reference picture B is 85%
  • the weight factor of the reference picture A is 1.5
  • the weight factor of the reference picture B is 1.0.
  • the similarity of reference picture A is multiplied by the corresponding weight factor of 1.5 to get 90%
  • the similarity of reference picture B is multiplied by the corresponding weight factor of 1.0 to get 85%; the target picture is then determined based on the respective weighted values of reference picture A and reference picture B.
  • the electronic device may select the reference picture A with a higher value as the target picture, or may select the reference picture B with a lower value as the target picture.
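The weighted selection in the example above can be sketched as follows (the function name `pick_target_picture` and the dictionary layout are illustrative, not from the patent):

```python
def pick_target_picture(similarities: dict, weights: dict) -> str:
    # Multiply each reference picture's similarity by its weight factor
    # and return the name of the highest-scoring picture.
    scores = {name: sim * weights.get(name, 1.0)
              for name, sim in similarities.items()}
    return max(scores, key=scores.get)

# Numbers from the example above: A scores 0.60 * 1.5 = 0.90,
# B scores 0.85 * 1.0 = 0.85, so A is selected.
print(pick_target_picture({"A": 0.60, "B": 0.85}, {"A": 1.5, "B": 1.0}))  # A
```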
  • Operation 212 Obtain a time element, and generate a dial based on the time element and the target picture.
  • the time element refers to an element that includes time information.
  • Time elements can include time scale, hour hand, minute hand, second hand, etc.
  • the style of the time element is not limited.
  • the style of the time element is cartoon style, landscape style, object style, and so on.
  • the time information included in the time element can be running or static.
  • the time element can be a running clock, or a texture that includes a clock, and the clock in the texture is static.
  • the electronic device may use the target picture as the background picture, and use the time element as the foreground to perform superposition processing to generate a dial.
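The superposition step can be sketched as a per-pixel alpha blend, with the target picture as background and the time-element layer as foreground. This is one plausible implementation of the overlay, not necessarily the patent's:

```python
import numpy as np

def compose_dial(background: np.ndarray, foreground: np.ndarray,
                 alpha: np.ndarray) -> np.ndarray:
    # Alpha-blend the time-element layer (foreground) over the target
    # picture (background); alpha holds per-pixel opacity in [0, 1].
    a = alpha[..., None].astype(np.float32)
    blended = foreground.astype(np.float32) * a \
        + background.astype(np.float32) * (1.0 - a)
    return blended.astype(np.uint8)

bg = np.full((4, 4, 3), 200, dtype=np.uint8)   # target picture
fg = np.full((4, 4, 3), 10, dtype=np.uint8)    # time-element layer
mask = np.zeros((4, 4), dtype=np.float32)
mask[0, 0] = 1.0                               # e.g. one pixel of a clock hand
dial = compose_dial(bg, fg, mask)
```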
  • the above dial generation method matches the feature to be matched of the picture to be matched with the reference feature of each reference picture, so that a more accurate target picture can be determined from the reference pictures based on the similarity between the picture to be matched and each reference picture; the time element is then obtained, and a more accurate dial can be generated based on the time element and the target picture.
  • the pictures to be matched are landscapes, buildings, cars, etc. taken by electronic equipment, so that target pictures such as beautiful scenery, world-famous buildings, and famous cars are determined from the reference pictures, and various dials are generated.
  • the above method for determining the target picture may also be applied to solutions such as picture recommendation and shopping recommendation.
  • the above method further includes: determining the category of the picture to be matched based on the feature of the picture to be matched, determining the reference pictures that match the category of the picture to be matched, and using the reference pictures that match the category of the picture to be matched as intermediate pictures;
  • matching the feature to be matched with the reference feature of each reference picture and determining the similarity between the picture to be matched and each reference picture includes: matching the feature to be matched with the reference feature of each intermediate picture to determine the similarity between the picture to be matched and each intermediate picture; determining the target picture from each reference picture based on the similarity between the picture to be matched and each reference picture includes: determining the target picture from each intermediate picture based on the similarity between the picture to be matched and each intermediate picture.
  • the scene in the picture to be matched and the objects included in the picture to be matched can be identified, so that the category of the picture to be matched can be determined.
  • the electronic device may pre-categorize the reference pictures, and then use the reference pictures whose categories are consistent with the categories of the pictures to be matched as intermediate pictures.
  • the intermediate pictures are selected from the reference pictures, and then the feature to be matched is matched with the reference features of the intermediate pictures; this avoids matching the feature to be matched with the reference features of all reference pictures, which can improve the efficiency of feature matching.
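Pre-filtering the reference set by category, as described above, might look like this (the dictionary layout of the reference pictures is an assumption for illustration):

```python
def select_intermediate(reference_pictures: list, query_category: str) -> list:
    # Keep only reference pictures whose category matches the category of
    # the picture to be matched, so feature matching runs on a smaller set.
    return [p for p in reference_pictures if p["category"] == query_category]

refs = [{"name": "A", "category": "building"},
        {"name": "B", "category": "flower"},
        {"name": "C", "category": "building"}]
intermediate = select_intermediate(refs, "building")
print([p["name"] for p in intermediate])  # ['A', 'C']
```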
  • the method further includes:
  • Operation 302 Determine a to-be-matched area from the to-be-matched picture, and obtain a sub-picture according to the to-be-matched area.
  • the area to be matched refers to the area selected from the picture to be matched.
  • the shape of the area to be matched is not limited; it can be a circle, a rectangle, a triangle, or an irregular shape.
  • the sub-picture refers to the picture generated according to the area to be matched.
  • the electronic device may use the area to be matched as a sub-picture.
  • the electronic device may obtain the sub-picture from the area to be matched. For example, if the area to be matched is an irregular shape, the largest rectangular area can be determined from the area to be matched as a sub-picture.
  • the specific implementation manner for obtaining the sub-picture according to the area to be matched is not limited, and can be set according to the needs of the user.
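One simple way to obtain a sub-picture from a region given as a mask is to crop the region's axis-aligned bounding rectangle. Note this is only a simplified stand-in for the largest inscribed rectangle mentioned above:

```python
import numpy as np

def crop_subpicture(picture: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    # Cut the region to be matched out of the picture to be matched.
    # The region is a boolean mask; the crop is its bounding rectangle.
    ys, xs = np.nonzero(region_mask)
    return picture[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

img = np.arange(36).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[1:4, 2:5] = True          # irregular regions work the same way
sub = crop_subpicture(img, mask)
print(sub.shape)  # (3, 3)
```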
  • Extracting the features to be matched of the picture to be matched includes:
  • the sub feature refers to the feature of the sub picture.
  • the sub-feature may include at least one of a local feature and a global feature of the sub-picture. Local features such as texture features and contour features of sub-pictures; global features such as color features and contrast features of sub-pictures.
  • the sub-feature of the sub-picture can be represented by a vector.
  • the electronic device inputs the sub-picture into the feature extraction model, and extracts the sub-features of the sub-picture through the trained feature extraction model.
  • deep learning and metric learning are used to train the feature extraction model.
  • This deep learning uses Convolutional Neural Networks (CNN) for learning.
  • Metric Learning is a space-mapping method that can learn a feature (embedding) space in which all data are converted into feature vectors; the distance between the feature vectors of similar samples is small, and the distance between the feature vectors of dissimilar samples is large, so that the data can be distinguished.
  • the convolutional neural network in the feature extraction model is composed of multiple convolutional layers.
  • the shallow convolutional layers can extract local detail features such as texture and contour from the sub-picture, and the high-level convolutional layers can extract global abstract features such as color and contrast; finally, the entire convolutional neural network embeds the sub-picture into a high-dimensional vector (usually 128-dimensional, 256-dimensional, 512-dimensional, etc.) and outputs the high-dimensional vector.
  • the high-dimensional vector is the sub-feature of the sub-picture.
  • the electronic device can also perform processing such as denoising and de-wrinkling on the sub-picture, and then perform feature extraction on the processed sub-picture, so that more accurate sub-features can be extracted.
  • the feature to be matched is matched with the reference feature of each reference picture respectively, and the similarity between the picture to be matched and each reference picture is determined, including:
  • the sub-features are respectively matched with the reference features of each reference picture, and the similarity between the sub-picture and each reference picture is determined.
  • the electronic device calculates the cosine distance between the sub-feature and the reference feature, and uses the cosine distance as the similarity between the sub-picture and the reference picture.
  • cosine distance, also known as cosine similarity, uses the cosine value of the angle between two vectors in the vector space as a measure of the difference between two individuals.
  • the target picture is determined from each reference picture, including:
  • a target picture is determined from each reference picture based on the similarity between the sub-picture and each reference picture.
  • the area to be matched is determined from the picture to be matched, the sub-picture is obtained according to the area to be matched, and the sub-feature of the sub-picture is matched with the reference feature of the reference picture; this avoids extracting the features of all areas of the picture to be matched and avoids matching the features of all areas of the picture to be matched, which saves the resources of the electronic device, improves the efficiency of feature matching, and allows the target picture to be determined more quickly.
  • extracting the target feature of the target region of the picture to be matched includes: obtaining the target scale; adjusting the size of the sub-picture to the target scale; normalizing the pixel value of each pixel in the sub-picture of the target scale; and performing feature extraction on the normalized sub-picture to obtain the target feature of the sub-picture.
  • the area to be matched is determined from the picture to be matched, and the sub-picture is obtained according to the area to be matched.
  • the scale of the sub-picture may be different from the scale of the reference picture. Therefore, the size of the sub-picture is adjusted to the target scale.
  • the target scale can be set according to user needs. When the target scale is larger than the original scale of the sub-picture, the sub-picture is enlarged; when the target scale is smaller than the original scale of the sub-picture, the sub-picture is reduced.
  • the size of the sub-picture is adjusted to the target scale (224 × 224 pixels).
  • Normalization refers to mapping data to the range of 0 to 1, which can be processed more conveniently and quickly. Specifically, the pixel value of each pixel in the sub-picture of the target scale is obtained, and the pixel value is mapped from 0-255 to a range of 0-1.
  • the size of the sub-picture is adjusted to the target scale; the pixel value of each pixel in the sub-picture of the target scale is normalized, which can facilitate subsequent processing of the normalized sub-picture.
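The resize-and-normalize preprocessing described above can be sketched in a few lines. Nearest-neighbour resampling is used here for self-containment; a production pipeline would more likely use bilinear interpolation:

```python
import numpy as np

def preprocess(picture: np.ndarray, target: int = 224) -> np.ndarray:
    # Nearest-neighbour resize to target x target, then map pixel values
    # from the 0-255 range into 0-1, as described in the text.
    h, w = picture.shape[:2]
    rows = np.arange(target) * h // target
    cols = np.arange(target) * w // target
    resized = picture[rows][:, cols]
    return resized.astype(np.float32) / 255.0

img = np.random.default_rng(0).integers(0, 256, (300, 400, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape)  # (224, 224, 3)
```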
  • before obtaining the reference picture and the reference feature of the reference picture, the method further includes: obtaining the reference picture; adjusting the size of the reference picture to the target scale; normalizing the pixel value of each pixel in the reference picture of the target scale; and performing feature extraction on the normalized reference picture to obtain the reference feature of the reference picture.
  • in this way, the reference picture and the sub-picture undergo feature matching under the same conditions, so the similarity between the sub-picture and the reference picture can be obtained more accurately, and the target picture can therefore be determined from the reference pictures more accurately.
  • the pixel value of each pixel in the reference picture is normalized to facilitate subsequent processing of the reference picture.
  • determining the area to be matched from the picture to be matched includes:
  • a center weight map corresponding to the picture to be matched is generated, wherein the weight value represented by the center weight map gradually decreases from the center to the edge.
  • the central weight map refers to a map used to record the weight value of each pixel in the picture to be matched.
  • the weight value recorded in the center weight map gradually decreases from the center to the four sides, that is, the center weight is the largest, and the weight gradually decreases toward the four sides.
  • the weight value from the center pixel point of the picture to be matched to the edge pixel point of the picture is gradually reduced through the center weight map.
  • the ISP processor or the central processor can generate a corresponding center weight map according to the size of the picture to be matched.
  • the weight value represented by the center weight map gradually decreases from the center to the four sides.
  • the center weight map can be generated using a Gaussian function, or a first-order equation, or a second-order equation.
  • the Gaussian function may be a two-dimensional Gaussian function.
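A center weight map of the kind described, built from a two-dimensional Gaussian so the weight peaks at the center and decays toward the four edges, might be generated like this (the `sigma` value is an illustrative choice):

```python
import numpy as np

def center_weight_map(h: int, w: int, sigma: float = 0.5) -> np.ndarray:
    # 2-D Gaussian over normalised coordinates in [-1, 1]: the weight is
    # largest at the image center and gradually decreases toward the edges.
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    return np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma ** 2))

m = center_weight_map(5, 5)
print(m[2, 2] == m.max())  # True: the center pixel carries the largest weight
```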
  • Operation 404 Input the picture to be matched and the center weight map into the subject detection model to obtain a subject area confidence map, where the subject detection model is obtained in advance by training on pictures to be matched, center weight maps, and the corresponding labeled subject mask maps of the same scene.
  • the subject detection model is obtained by pre-collecting a large amount of training data, and inputting the training data into the subject detection model containing the initial network weights for training.
  • Each set of training data includes the image to be matched corresponding to the same scene, the center weight map, and the labeled subject mask map.
  • the image to be matched and the center weight map are used as the input of the trained subject detection model, and the labeled subject mask map is used as the ground truth that the trained subject detection model expects to output.
  • the subject mask map is an image filter template used to identify the subject in the picture. It can block other parts of the picture and filter out the subject in the picture.
  • the subject detection model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, etc.
  • the ISP processor or the central processing unit can input the to-be-matched image and the center weight map into the subject detection model, and the subject area confidence map can be obtained by performing the detection.
  • the subject area confidence map is used to record the probability that each pixel belongs to each recognizable subject. For example, the probability that a certain pixel belongs to a person is 0.8, to a flower is 0.1, and to the background is 0.1.
  • Operation 406 Determine the target subject in the picture to be matched according to the subject region confidence map, and use the area where the target subject is located as the region to be matched.
  • the ISP processor or the central processing unit can select the subject with the highest or second-highest confidence in the picture to be matched according to the subject area confidence map. If there is one subject, that subject is used as the target subject; if there are multiple subjects, one or more of them can be selected as the target subject as needed.
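A simplified sketch of choosing the target subject from a subject area confidence map. Here each class is scored by its mean per-pixel confidence; the patent's selection by highest or second-highest confidence may differ in detail:

```python
import numpy as np

def pick_target_subject(confidence: np.ndarray, labels: list) -> str:
    # confidence has shape (num_classes, H, W): per-pixel probability of
    # each class. Score each class by its mean confidence over the picture
    # and return the best label as the target subject.
    scores = confidence.reshape(confidence.shape[0], -1).mean(axis=1)
    return labels[int(np.argmax(scores))]

# Per-pixel probabilities matching the person/flower/background example above.
conf = np.zeros((3, 2, 2))
conf[0] = 0.8   # person
conf[1] = 0.1   # flower
conf[2] = 0.1   # background
print(pick_target_subject(conf, ["person", "flower", "background"]))  # person
```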
  • The picture to be matched and the center weight map are input into the corresponding subject detection model for detection, and the subject-area confidence map can be obtained.
  • From the subject-area confidence map, the target subject in the picture to be matched can be determined. The center weight map makes objects in the center of the image easier to detect, and with a subject detection model trained on pictures to be matched, center weight maps, and subject mask maps, the target subject in the picture to be matched can be identified more accurately. The area where the target subject is located is used as the region to be matched, so the region to be matched in the picture to be matched can be determined more accurately.
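The center weight map and subject selection described above can be sketched as follows. This is an illustrative approximation in plain Python, not the patent's trained model: the linear fall-off, the function names, and the class-to-confidence dictionary are all assumptions.

```python
# Illustrative sketch: a center weight map whose values fall off from the
# image center toward the edges, and target-subject selection as the class
# with the highest confidence. Names and the fall-off shape are assumptions.

def center_weight_map(height, width):
    """Weights decrease linearly from 1.0 at the center to 0.0 at the edges."""
    cy, cx = (height - 1) / 2, (width - 1) / 2
    max_d = max(cy, cx) or 1.0  # avoid division by zero for 1x1 maps
    return [[1.0 - max(abs(y - cy), abs(x - cx)) / max_d
             for x in range(width)] for y in range(height)]

def target_subject(confidences):
    """confidences: {class_name: confidence}; returns the top-confidence class."""
    return max(confidences, key=confidences.get)

weights = center_weight_map(5, 5)
subject = target_subject({"person": 0.8, "flower": 0.1, "background": 0.1})
```

With the example confidences from the text, the selected target subject is "person"; the center pixel of the weight map carries weight 1.0 and the corners carry 0.0.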
  • The above method further includes dividing the reference pictures into at least two reference categories. Determining the target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture then includes: determining the category of the picture to be matched based on the similarity between the picture to be matched and each reference picture; using the reference category that matches the category of the picture to be matched as the target category; and determining the target picture from the reference pictures included in the target category.
  • the electronic device obtains the label of each reference picture, and classifies the reference pictures with the same label into the same reference category.
  • For example, the label of reference picture A is "building", that of reference picture B is "flowers", that of reference picture C is "flowers", that of reference picture D is "building", and that of reference picture E is "building".
  • Reference pictures A, D, and E are classified into the same reference category "building", and reference pictures B and C are classified into the same reference category "flowers".
  • In one implementation, the electronic device may use the reference category of the reference picture with the highest similarity as the category of the picture to be matched. In another implementation, the electronic device may obtain a preset number of reference pictures with the highest similarities and use the most common reference category among them as the category of the picture to be matched. In other embodiments, the electronic device may determine the category of the picture to be matched in other ways, without limitation.
  • the target category refers to the reference category that matches the category of the picture to be matched.
  • the number of target pictures can be one or at least two.
  • The category of the picture to be matched is determined, the reference category matching it is used as the target category, and the target picture is determined from the reference pictures included in the target category. This avoids determining the target picture from all reference pictures, which improves not only the efficiency but also the accuracy of determining the target picture.
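The category-matching step above can be sketched as a top-k majority vote: rank the reference pictures by similarity, then take the most common reference category among the top k as the category of the picture to be matched. The function names, the value of k, and the sample scores are assumptions for illustration.

```python
# Hypothetical sketch of determining the category of the picture to be
# matched from the categories of its most similar reference pictures.
from collections import Counter

def match_category(similarities, categories, k=3):
    """similarities: {pic_id: score}; categories: {pic_id: label}.
    Returns the most common label among the k most similar pictures."""
    top = sorted(similarities, key=similarities.get, reverse=True)[:k]
    return Counter(categories[p] for p in top).most_common(1)[0][0]

sims = {"A": 0.92, "B": 0.88, "C": 0.85, "D": 0.40, "E": 0.35}
labels = {"A": "building", "B": "flowers", "C": "building",
          "D": "building", "E": "building"}
cat = match_category(sims, labels)  # "building" (2 of the top 3)
```

The target category would then be "building", and the target picture is searched for only among the reference pictures in that category.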
  • the electronic device obtains the reference picture 502; performs operation 504 to classify the reference picture 502, and divide the reference picture 502 into at least two reference categories.
  • the electronic device executes operation 506 to perform denoising and de-wrinkling on the classified reference pictures to obtain a picture library 508.
  • the electronic device performs operation 510 to perform deep learning and metric learning on the reference pictures in the picture library 508, and can obtain the reference features of each reference picture in the picture library, thereby generating a picture feature library 512.
  • The execution of 502 to 512 may be performed in advance or during the dial generation process, without limitation.
  • the electronic device obtains the picture to be matched 514; performs operation 516 to perform denoising and de-wrinkling on the picture to be matched 514; and then performs operation 518 to extract features from the picture to be matched after denoising and de-wrinkling to obtain the features to be matched.
  • The electronic device performs operation 520 to match the feature to be matched against the reference feature of each reference picture, obtaining the similarity between the picture to be matched and each reference picture; then, based on these similarities, it determines the target picture 522 from the reference pictures; and it obtains the time element and generates the dial 524 based on the time element and the target picture.
  • The electronic device may determine the category of the picture to be matched based on the similarity between the picture to be matched and each reference picture, use the reference category that matches the category of the picture to be matched as the target category, and determine the target picture 522 from the reference pictures included in the target category, which can improve the efficiency of determining the target picture.
  • Obtaining the time element and generating the dial based on the time element and the target picture includes: obtaining the time element, generating candidate dials based on the time element and each target picture determined in the target category, and displaying the candidate dials on the display interface; and receiving a selection instruction for a candidate dial and displaying the selected candidate dial on the display interface to generate the dial.
  • When one target picture is determined, a candidate dial is generated based on the time element and the target picture, and the electronic device can directly generate the dial from that candidate dial.
  • When at least two target pictures are determined, at least two candidate dials are generated based on the time element and the target pictures and displayed on the display interface; when a selection instruction for a candidate dial is received, the selected candidate dial is displayed on the display interface to generate the dial.
  • Candidate dials are generated based on the time element and the target pictures determined in the target category, and one of them can be selected and displayed on the display interface, thereby generating the dial and improving the richness of the generated dials.
  • Obtaining the time element and generating the dial based on the time element and the target picture includes: obtaining the category of the target picture; obtaining the corresponding target style based on the category of the target picture; and obtaining the time element of the target style and generating the dial based on the target picture and the time element of the target style.
  • At least one style corresponding to each category may be pre-stored.
  • After the electronic device obtains the category of the target picture, it matches the category against each stored category to obtain the target style corresponding to the category of the target picture.
  • Target styles include, for example, a cartoon style, a landscape style, and an architectural style.
  • For example, the styles of the category "Architecture" are obtained from the memory of the electronic device, such as the "Canton Tower" style, the "Window of the World" style, and the "Yellow Crane Tower" style.
  • The time element of the target style corresponding to the category of the target picture is acquired, so the time element matches the target picture better and fits it more closely, and a more accurate dial can be generated based on the target picture and the time element of the target style.
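The category-to-style lookup above can be sketched as a simple table. The style table, the default style, and the function name are assumptions; the category and style names come from the example in the text.

```python
# Minimal sketch of mapping a target picture's category to a stored target
# style; the table contents and the default are illustrative assumptions.

STYLES = {
    "Architecture": ["Canton Tower", "Window of the World", "Yellow Crane Tower"],
    "flowers": ["watercolor"],
}

def target_style(category, default="plain"):
    """Return the first stored style for the category, or a default style."""
    return STYLES.get(category, [default])[0]

style = target_style("Architecture")  # "Canton Tower"
```

The time element rendered in the chosen style would then be composited with the target picture to produce the dial.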
  • A dial generation method is provided, including: acquiring a picture to be matched; extracting the feature to be matched of the picture to be matched; acquiring reference pictures and reference features of the reference pictures; matching the feature to be matched against the reference feature of each reference picture to determine the similarity between the picture to be matched and each reference picture; determining the target picture from the reference pictures based on those similarities; and sending the target picture to the wearable device, where the target picture is used to instruct the wearable device to acquire the time element and generate a dial based on the time element and the target picture.
  • Wearable devices such as smart watches, smart bracelets, etc.
  • Time-consuming, computation-heavy tasks such as feature extraction and feature matching are performed on the electronic device, and the final target picture is sent to the wearable device.
  • The wearable device only needs to obtain the time element and then generate a dial based on the time element and the target picture, which reduces the operating pressure on the wearable device so that its other functions can be better realized.
  • A dial generation method applied to a wearable device is provided, including: acquiring a target picture sent by an electronic device, where the target picture is determined by the electronic device from the acquired reference pictures based on the similarity between the acquired picture to be matched and each reference picture, the similarity between the picture to be matched and each reference picture is obtained by the electronic device matching the feature to be matched of the picture to be matched against the reference feature of each reference picture, and the feature to be matched is extracted by the electronic device from the picture to be matched; and acquiring the time element and generating the dial based on the time element and the target picture.
  • In the process of generating the target picture, the time-consuming, computation-heavy tasks such as feature extraction and feature matching are performed on the electronic device; the wearable device receives the target picture sent by the electronic device and then obtains the time element.
  • The dial is generated based on the time element and the target picture, which reduces the operating pressure on the wearable device so that its other functions can be better realized.
  • The operations in FIGS. 2 to 4 may include multiple sub-operations or stages. These sub-operations or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential: they may be executed in turn or alternately with at least part of the sub-operations or stages of other operations.
  • Fig. 6 is a structural block diagram of a dial generating device of an embodiment.
  • A dial generation device 600 is provided, including: a picture-to-be-matched acquisition module 602, a feature extraction module 604, a reference picture and reference feature acquisition module 606, a matching module 608, a target picture determination module 610, and a dial generation module 612, where:
  • the to-be-matched picture obtaining module 602 is used to obtain the to-be-matched picture.
  • the feature extraction module 604 is used to extract features to be matched of the picture to be matched.
  • the reference picture and reference feature acquisition module 606 is configured to acquire reference pictures and reference features of the reference pictures.
  • the matching module 608 is configured to respectively match the feature to be matched with the reference feature of each reference picture, and determine the similarity between the picture to be matched and each reference picture.
  • the target picture determining module 610 is configured to determine a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture.
  • the dial generation module 612 is used to obtain the time element, and generate the dial based on the time element and the target picture.
  • The above dial generation device matches the feature to be matched of the picture to be matched against the reference features of the reference pictures, so that a more accurate target picture can be determined from the reference pictures based on the similarity between the picture to be matched and each reference picture; the time element is then obtained, and a more accurate dial can be generated based on the time element and the target picture.
  • The above dial generation device 600 further includes an intermediate picture determination module configured to determine the category of the picture to be matched based on its feature to be matched, determine the reference pictures that match that category, and use them as intermediate pictures. The matching module 608 is further configured to match the feature to be matched against the reference feature of each intermediate picture to determine the similarity between the picture to be matched and each intermediate picture. The target picture determination module 610 is further configured to determine the target picture from the intermediate pictures based on the similarity between the picture to be matched and each intermediate picture.
  • The above dial generation device 600 further includes a sub-picture acquisition module configured to determine the region to be matched from the picture to be matched and obtain a sub-picture according to that region. The feature extraction module 604 is further configured to extract the sub-feature of the sub-picture. The matching module 608 is further configured to match the sub-feature against the reference feature of each reference picture to determine the similarity between the sub-picture and each reference picture. The target picture determination module 610 is further configured to determine the target picture from the reference pictures based on the similarity between the sub-picture and each reference picture.
  • The feature extraction module 604 is further configured to obtain a target scale; adjust the size of the sub-picture to the target scale; normalize the pixel value of each pixel in the sub-picture at the target scale; and perform feature extraction on the normalized sub-picture to obtain the target feature of the sub-picture.
  • The feature extraction module 604 is further configured to obtain a reference picture; adjust the size of the reference picture to the target scale; normalize the pixel value of each pixel in the reference picture at the target scale; and perform feature extraction on the normalized reference picture to obtain the reference feature of the reference picture.
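The preprocessing described in the two items above (resize to a target scale, then normalize pixel values) can be sketched as follows. This is a hedged illustration: the patent does not specify the interpolation method or normalization range, so nearest-neighbor sampling and scaling to [0, 1] are assumptions.

```python
# Illustrative sketch: resize a picture (a nested list of pixel values) to a
# fixed target scale with nearest-neighbor sampling, then normalize each
# pixel value to [0, 1]. Both choices are assumptions, not from the patent.

def resize_nearest(img, target_h, target_w):
    """Nearest-neighbor resize of a 2-D nested list of pixel values."""
    h, w = len(img), len(img[0])
    return [[img[y * h // target_h][x * w // target_w]
             for x in range(target_w)] for y in range(target_h)]

def normalize(img, max_val=255.0):
    """Scale 8-bit pixel values into the range [0, 1]."""
    return [[px / max_val for px in row] for row in img]

pic = [[0, 255], [255, 0]]           # tiny 2x2 grayscale example
prepared = normalize(resize_nearest(pic, 4, 4))
```

After this step, both the sub-picture and every reference picture share the same scale and value range, so the extracted features are comparable.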
  • The sub-picture acquisition module is further configured to generate a center weight map corresponding to the picture to be matched, where the weight values represented by the center weight map gradually decrease from the center to the edge; input the picture to be matched and the center weight map into the subject detection model to obtain a subject-area confidence map, where the subject detection model is obtained by pre-training on pictures to be matched, center weight maps, and corresponding labeled subject mask maps of the same scenes; and determine the target subject in the picture to be matched according to the subject-area confidence map, using the area where the target subject is located as the region to be matched.
  • The above dial generation device further includes a classification module configured to divide the reference pictures into at least two reference categories. The target picture determination module 610 is further configured to determine the category of the picture to be matched based on the similarity between the picture to be matched and each reference picture, use the reference category that matches that category as the target category, and determine the target picture from the reference pictures included in the target category.
  • The dial generation module 612 is further configured to obtain the time element, generate candidate dials based on the time element and each target picture determined in the target category, display the candidate dials on the display interface, receive a selection instruction for a candidate dial, and display the selected candidate dial on the display interface to generate the dial.
  • the above-mentioned dial generation module 612 is also used to obtain the category of the target picture; obtain the corresponding target style based on the category of the target picture; obtain the time element of the target style, and generate the dial based on the target picture and the time element of the target style.
  • Fig. 7 is a structural block diagram of a dial generating device of another embodiment.
  • A dial generation device 700 is provided, including: a picture-to-be-matched acquisition module 702, a feature extraction module 704, a reference picture and reference feature acquisition module 706, a matching module 708, a target picture determination module 710, and a dial generation module 712, where:
  • the to-be-matched picture obtaining module 702 is used to obtain the to-be-matched picture.
  • the feature extraction module 704 is used to extract features to be matched of the picture to be matched.
  • the reference picture and reference feature acquisition module 706 is used to acquire a reference picture and a reference feature of the reference picture.
  • the matching module 708 is configured to match the feature to be matched with the reference feature of each reference picture, respectively, and determine the similarity between the picture to be matched and each reference picture.
  • the target picture determining module 710 is configured to determine a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture.
  • the dial generating module 712 is used to send the target picture to the wearable device; the target picture is used to instruct the wearable device to obtain the time element, and to generate the dial based on the time element and the target picture.
  • the above-mentioned dial generating device matches the feature to be matched of the picture to be matched with the reference feature of the reference picture, so that a more accurate target picture can be determined from each reference picture based on the similarity between the picture to be matched and the reference picture.
  • the target image is sent to the wearable device for the wearable device to generate a more accurate dial.
  • Fig. 8 is a structural block diagram of a dial generating device of another embodiment.
  • A dial generation device 800 is provided, including: a target picture acquisition module 802 and a dial generation module 804, where:
  • The target picture acquisition module 802 is configured to acquire the target picture sent by the electronic device, where the target picture is determined by the electronic device from the acquired reference pictures based on the similarity between the acquired picture to be matched and each reference picture; the similarity between the picture to be matched and each reference picture is obtained by the electronic device matching the feature to be matched of the picture to be matched against the reference feature of each reference picture; and the feature to be matched is extracted by the electronic device from the picture to be matched.
  • the dial generation module 804 is used to obtain the time element, and generate the dial based on the time element and the target picture.
  • The above dial generation device acquires the target picture sent by the electronic device. Because the target picture is determined by the electronic device from the reference pictures based on the similarity between the picture to be matched and each reference picture, the determined target picture is more accurate; the time element is then obtained, and a more accurate dial is generated based on the time element and the target picture.
  • The dial generation device may be divided into different modules as required to complete all or part of the functions of the above dial generation device.
  • each module in the above-mentioned dial generating device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the foregoing modules may be embedded in the form of hardware or independent of the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the foregoing modules.
  • Fig. 9 is a schematic diagram of the internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor and a memory connected via a system bus.
  • the processor is used to provide calculation and control capabilities to support the operation of the entire electronic device.
  • the memory may include a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by the processor to implement a dial generation method provided in the following embodiments.
  • The internal memory provides a cached operating environment for the operating system and computer programs in the non-volatile storage medium.
  • The electronic device can be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, or a wearable device.
  • each module in the dial generating device may be in the form of a computer program.
  • the computer program can be run on an electronic device.
  • the program module composed of the computer program can be stored in the memory of the electronic device.
  • each module in the dial generating device may be in the form of a computer program.
  • the computer program can be run on a wearable device.
  • the program module composed of the computer program can be stored in the memory of the wearable device.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • A computer program product containing instructions is also provided which, when run on a computer, causes the computer to execute the dial generation method.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

A dial generation method, comprising: acquiring a picture to be matched (202); extracting a feature to be matched of the picture to be matched (204); acquiring reference pictures and reference features of the reference pictures (206); matching the feature to be matched against the reference feature of each reference picture to determine the similarity between the picture to be matched and each reference picture (208); determining a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture (210); and acquiring a time element and generating a dial based on the time element and the target picture (212).

Description

Dial generation method and device, electronic device, and computer-readable storage medium
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202010499509.0, entitled "Dial generation method and device, electronic device, and computer-readable storage medium", filed with the Chinese Patent Office on June 4, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a dial generation method and device, an electronic device, and a computer-readable storage medium.
Background
With the development of mobile technology, many traditional electronic products have begun to add mobile functions. For example, watches that could once only be used to tell time can now connect to the Internet through a smartphone or a home network and display incoming-call information, social-media messages, news, weather, and other content.
On the display interface of an electronic device such as a smart watch or smart band, a user can select a favorite pattern as the dial. However, conventional dial generation methods suffer from the problem that the generated dial is inaccurate.
Summary
According to various embodiments of the present application, a dial generation method and device, an electronic device, and a computer-readable storage medium are provided.
A dial generation method includes:
acquiring a picture to be matched;
extracting a feature to be matched of the picture to be matched;
acquiring reference pictures and reference features of the reference pictures;
matching the feature to be matched against the reference feature of each reference picture to determine the similarity between the picture to be matched and each reference picture;
determining a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and
acquiring a time element, and generating a dial based on the time element and the target picture.
A dial generation method includes:
acquiring a picture to be matched;
extracting a feature to be matched of the picture to be matched;
acquiring reference pictures and reference features of the reference pictures;
matching the feature to be matched against the reference feature of each reference picture to determine the similarity between the picture to be matched and each reference picture;
determining a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and
sending the target picture to a wearable device, the target picture being used to instruct the wearable device to acquire a time element and generate a dial based on the time element and the target picture.
An electronic device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the operations of the dial generation method described above.
A computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the operations of the method described above.
In the dial generation method and device, electronic device, and computer-readable storage medium described above, the feature to be matched of the picture to be matched is matched against the reference features of the reference pictures, so that a more accurate target picture can be determined from the reference pictures based on the similarity between the picture to be matched and each reference picture; a time element is then acquired, and a more accurate dial can be generated based on the time element and the target picture.
A dial generation method includes:
acquiring a picture to be matched;
extracting a feature to be matched of the picture to be matched;
acquiring reference pictures and reference features of the reference pictures;
matching the feature to be matched against the reference feature of each reference picture to determine the similarity between the picture to be matched and each reference picture;
determining a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and
sending the target picture to a wearable device, the target picture being used to instruct the wearable device to acquire a time element and generate a dial based on the time element and the target picture.
A dial generation device includes:
a picture-to-be-matched acquisition module, configured to acquire a picture to be matched;
a feature extraction module, configured to extract a feature to be matched of the picture to be matched;
a reference picture and reference feature acquisition module, configured to acquire reference pictures and reference features of the reference pictures;
a matching module, configured to match the feature to be matched against the reference feature of each reference picture to determine the similarity between the picture to be matched and each reference picture;
a target picture determination module, configured to determine a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and
a dial generation module, configured to send the target picture to a wearable device, the target picture being used to instruct the wearable device to acquire a time element and generate a dial based on the time element and the target picture.
An electronic device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the operations of the dial generation method described above.
A computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the operations of the method described above.
In the dial generation method and device, electronic device, and computer-readable storage medium described above, the feature to be matched of the picture to be matched is matched against the reference features of the reference pictures, so that a more accurate target picture can be determined from the reference pictures based on the similarity between the picture to be matched and each reference picture; the target picture is then sent to the wearable device for the wearable device to generate a more accurate dial.
A dial generation method applied to a wearable device includes:
acquiring a target picture sent by an electronic device, the target picture being determined by the electronic device from acquired reference pictures based on the similarity between an acquired picture to be matched and each of the reference pictures, the similarity between the picture to be matched and each reference picture being obtained by the electronic device matching a feature to be matched of the picture to be matched against a reference feature of each reference picture, and the feature to be matched being extracted by the electronic device from the picture to be matched; and
acquiring a time element, and generating a dial based on the time element and the target picture.
A dial generation device applied to a wearable device includes:
a target picture acquisition module, configured to acquire a target picture sent by an electronic device, the target picture being determined by the electronic device from acquired reference pictures based on the similarity between an acquired picture to be matched and each of the reference pictures, the similarity being obtained by the electronic device matching a feature to be matched of the picture to be matched against a reference feature of each reference picture, and the feature to be matched being extracted by the electronic device from the picture to be matched; and
a dial generation module, configured to acquire a time element and generate a dial based on the time element and the target picture.
A wearable device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the operations of the dial generation method described above.
A computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the operations of the method described above.
In the dial generation method and device, wearable device, and computer-readable storage medium described above, the target picture sent by the electronic device is acquired; because the target picture is determined by the electronic device from the reference pictures based on the similarity between the picture to be matched and each reference picture, the determined target picture is more accurate; a time element is then acquired, and a more accurate dial is generated based on the time element and the target picture.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the following briefly introduces the accompanying drawings needed for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a diagram of the application environment of a dial generation method in an embodiment.
Fig. 2 is a flowchart of a dial generation method in an embodiment.
Fig. 3 is a flowchart of the operation of determining a target picture in an embodiment.
Fig. 4 is a flowchart of the operation of determining a region to be matched in an embodiment.
Fig. 5 is a flowchart of a dial generation method in another embodiment.
Fig. 6 is a structural block diagram of a dial generation device in an embodiment.
Fig. 7 is a structural block diagram of a dial generation device in another embodiment.
Fig. 8 is a structural block diagram of a dial generation device in another embodiment.
Fig. 9 is a schematic diagram of the internal structure of an electronic device in an embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain this application and are not intended to limit it.
Fig. 1 is a schematic diagram of the application environment of a dial generation method in an embodiment. As shown in Fig. 1, the application environment includes a wearable device 102 and an electronic device 104, which communicate over a network. The electronic device 104 acquires a picture to be matched; extracts a feature to be matched of the picture to be matched; acquires reference pictures and reference features of the reference pictures; matches the feature to be matched against the reference feature of each reference picture to determine the similarity between the picture to be matched and each reference picture; determines a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and sends the target picture to the wearable device 102 over the network. After receiving the target picture, the wearable device 102 acquires a time element and generates a dial based on the time element and the target picture.
Fig. 2 is a flowchart of a dial generation method in an embodiment. As shown in Fig. 2, the dial generation method includes operations 202 to 212.
Operation 202: acquire a picture to be matched.
The picture to be matched refers to the picture used for matching in order to generate a dial. It may be an RGB (Red, Green, Blue) picture, a grayscale picture, or the like. An RGB picture can be captured by a color camera; a grayscale picture can be captured by a monochrome camera. The picture to be matched may be stored locally on the electronic device, stored on another device, obtained from the network, or captured by the electronic device in real time, without limitation.
Specifically, the ISP (Image Signal Processing) processor or central processing unit of the electronic device may acquire the picture to be matched locally or from another device, or capture it with a camera.
Operation 204: extract a feature to be matched of the picture to be matched.
The feature to be matched refers to a feature of the picture to be matched, and may include at least one of a local feature and a global feature of the picture. Local features include, for example, texture and contour features; global features include, for example, color and contrast features.
Optionally, the feature to be matched of the picture to be matched may be represented as a vector.
The electronic device inputs the picture to be matched into a feature extraction model, and extracts the feature to be matched with the trained feature extraction model. The feature extraction model is trained with deep learning and metric learning. The deep learning uses a convolutional neural network (CNN). Metric learning is a spatial-mapping method that learns an embedding space in which all data are converted into feature vectors such that the distance between feature vectors of similar samples is small and the distance between feature vectors of dissimilar samples is large, thereby distinguishing the data.
The convolutional neural network in the feature extraction model is composed of multiple convolutional layers: shallow layers extract features of local details such as texture and contours in the picture to be matched, and deep layers extract globally abstract features such as color and contrast. Finally, the whole network embeds the picture to be matched into a high-dimensional vector (typically 128, 256, or 512 dimensions) and outputs it. This high-dimensional vector is the feature to be matched of the picture to be matched.
Further, the electronic device may also denoise and de-wrinkle the picture to be matched before feature extraction, so that a more accurate feature to be matched can be extracted.
Operation 206: acquire reference pictures and reference features of the reference pictures.
A reference picture refers to a picture matched against the picture to be matched, and a reference feature refers to a feature of a reference picture. Likewise, a reference feature may include at least one of a local feature (e.g., texture or contour) and a global feature (e.g., color or contrast) of the reference picture. Optionally, the reference feature of a reference picture may be represented as a vector.
In one embodiment, the electronic device may extract the reference features from the reference pictures in advance. In another embodiment, the electronic device may also extract the reference features after acquiring the reference pictures.
The electronic device inputs each reference picture into the feature extraction model and extracts its reference feature with the trained model. As described above, the feature extraction model is trained with deep learning and metric learning: the deep learning uses a convolutional neural network (CNN), and metric learning learns an embedding space in which feature vectors of similar samples are close together and those of dissimilar samples are far apart, thereby distinguishing the data.
The convolutional neural network in the feature extraction model is composed of multiple convolutional layers: shallow layers extract features of local details such as texture and contours in the reference picture, and deep layers extract globally abstract features such as color and contrast. Finally, the whole network embeds the reference picture into a high-dimensional vector (typically 128, 256, or 512 dimensions) and outputs it; this high-dimensional vector is the reference feature of the reference picture.
Further, the electronic device may also denoise and de-wrinkle the reference pictures before feature extraction, so that more accurate reference features can be extracted.
Operation 208: match the feature to be matched against the reference feature of each reference picture to determine the similarity between the picture to be matched and each reference picture.
It can be understood that similar pictures have similar feature representations. The higher the similarity between the picture to be matched and a reference picture, the closer the feature to be matched is to that reference picture's reference feature.
Specifically, the electronic device computes the cosine distance between the feature to be matched and the reference feature, and uses it as the similarity between the picture to be matched and the reference picture. The cosine distance, also called cosine similarity, uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals.
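The cosine-similarity computation described in this operation can be sketched in plain Python; the feature vectors are illustrative stand-ins for the high-dimensional embeddings produced by the feature extraction model.

```python
# Cosine similarity between two feature vectors: the cosine of the angle
# between them, 1.0 for identical directions and 0.0 for orthogonal vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

feature_to_match = [1.0, 2.0, 3.0]
reference_feature = [2.0, 4.0, 6.0]
sim = cosine_similarity(feature_to_match, reference_feature)  # 1.0
```

In practice this would be evaluated once per reference picture, and the resulting scores ranked to determine the target picture.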
Operation 210: determine a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture.
Optionally, the number of determined target pictures may be one, or at least two.
In one implementation, the electronic device may determine the reference picture with the highest similarity as the target picture. In another implementation, the electronic device may determine the two reference pictures with the highest similarities as target pictures. In other implementations, the electronic device may also determine reference pictures with other similarities as target pictures.
Further, weight factors of the reference pictures may be acquired, and the target picture determined from the reference pictures based on both the similarity between the picture to be matched and each reference picture and the weight factor of each reference picture.
For example, suppose the similarity between the picture to be matched and reference picture A is 60%, the similarity between the picture to be matched and reference picture B is 85%, the weight factor of reference picture A is 1.5, and the weight factor of reference picture B is 1.0. Multiplying reference picture A's similarity by its weight factor 1.5 gives 90%, and multiplying reference picture B's similarity by its weight factor 1.0 gives 85%; the target picture is then determined from the values obtained for reference pictures A and B. The electronic device may select reference picture A, with the higher value, as the target picture, or select reference picture B, with the lower value.
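The weighted selection in the example above can be sketched as follows; the function name and dictionary layout are assumptions, while the scores and weights mirror the example's numbers.

```python
# Sketch of weighted similarity ranking: multiply each reference picture's
# similarity by its weight factor and rank by the weighted score.

def rank_weighted(similarities, weights):
    """Both dicts are keyed by picture id; missing weights default to 1.0.
    Returns picture ids sorted by weighted score, highest first."""
    scored = {p: similarities[p] * weights.get(p, 1.0) for p in similarities}
    return sorted(scored, key=scored.get, reverse=True)

ranking = rank_weighted({"A": 0.60, "B": 0.85}, {"A": 1.5, "B": 1.0})
# A scores 0.90 and B scores 0.85, so A ranks first.
```

Taking the first entry of the ranking reproduces the example's choice of reference picture A as the target picture.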
操作212,获取时间元素,基于时间元素和目标图片生成表盘。
时间元素指的是包括有时间信息的元素。时间元素可以包括时间刻度、时针、分针、秒针等。时间元素的样式并不限定,如时间元素的样式是卡通样式、风景样式、物品样式等等。时间元素中所包括的时间信息可以是运行的,也可以是静止的。例如,时间元素可以是运行的时钟,也可以是包括时钟的贴图,该贴图中的时钟是静止的。
具体地,电子设备可以将目标图片作为背景图片,将时间元素作为前景进行叠加处理,生成表盘。
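将目标图片作为背景、时间元素作为前景进行叠加,可用简单的 alpha 合成示意(假设性示例:图片以 numpy 数组表示,alpha 为时间元素的透明度掩膜):

```python
import numpy as np

def compose_watchface(background, foreground, alpha):
    """按透明度把时间元素(前景)叠加到目标图片(背景)上,生成表盘图像。"""
    a = np.asarray(alpha)[..., None]          # 把掩膜扩展到颜色通道维
    return a * foreground + (1.0 - a) * background

bg = np.full((4, 4, 3), 0.5)                  # 目标图片作为背景(灰色示意)
fg = np.zeros((4, 4, 3))                      # 时间元素(全黑像素示意指针)
mask = np.zeros((4, 4)); mask[1, 1] = 1.0     # 仅 (1,1) 处存在前景像素
dial = compose_watchface(bg, fg, mask)
```

前景像素处显示时间元素,其余位置保留目标图片,即背景与前景叠加生成表盘的过程。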
上述表盘生成方法,将待匹配图片的待匹配特征与参考图片的参考特征进行匹配,从而可以基于待匹配图片与参考图片之间的相似度,从各个参考图片中确定更加准确的目标图片,再获取时间元素,基于时间元素和目标图片可以生成更加准确的表盘。
进一步地,通过不同的待匹配图片,可以从参考图片中确定不同的目标图片,从而生成各种不同的表盘,提高了表盘的丰富性。例如,待匹配图片为电子设备拍摄的风景、建筑、汽车等,从而从参考图片中确定美景、世界名建筑物、名车等目标图片,从而生成各种表盘。
在一个实施例中,上述确定目标图片的方法还可以应用于图片的推荐、购物推荐等方案中。
在一个实施例中,上述方法还包括:基于待匹配图片的待匹配特征确定待匹配图片的类别,确定与待匹配图片的类别相匹配的参考图片,并将与待匹配图片的类别相匹配的参考图片作为中间图片;将待匹配特征与各个参考图片的参考特征分别进行匹配,确定待匹配图片与每一个参考图片之间的相似度,包括:将待匹配特征与各个中间图片的参考特征分别进行匹配,确定待匹配图片与每一个中间图片之间的相似度;基于待匹配图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片,包括:基于待匹配图片与每一个中间图片之间的相似度,从各个中间图片中确定目标图片。
可以理解的是,基于待匹配图片的待匹配特征可以识别出待匹配图片中的场景,以及待匹配图片中所包括的物体等信息,从而可以判断出待匹配图片的类别。
电子设备可以预先将参考图片进行分类,再将类别与待匹配图片的类别一致的参考图片作为中间图片。
在本实施例中,从参考图片中筛选出中间图片,再将待匹配特征与中间图片的参考特征进行匹配,避免了将待匹配特征与所有参考图片的参考特征进行匹配,可以提高特征匹配的效率。
在一个实施例中,如图3所示,获取待匹配图片之后,还包括:
操作302,从待匹配图片中确定待匹配区域,根据待匹配区域得到子图片。
待匹配区域指的是从待匹配图片中选取的区域。待匹配区域的形状并不限定,可以是圆形、矩形、三角形、以及不规则图形等。
子图片指的是根据待匹配区域生成的图片。在一种实施方式中,电子设备可以将待匹配区域作为子图片。在另一种实施方式中,电子设备可以从待匹配区域中获取子图片。例如,待匹配区域是不规则形状,可以从待匹配区域中确定最大的矩形区域作为子图片。根据待匹配区域得到子图片的具体实施方式并不限定,可以根据用户需要进行设定。
提取待匹配图片的待匹配特征,包括:
操作304,提取子图片的子特征。
子特征指的是子图片的特征。子特征可以包括子图片的局部特征和全局特征中的至少一种。局部特征例如子图片的纹理特征、轮廓特征等;全局特征例如子图片的颜色特征、对比度特征等。
可选地,子图片的子特征可以用向量进行表示。
电子设备将子图片输入特征提取模型中,通过训练完成的特征提取模型提取子图片的子特征。其中,采用深度学习与度量学习训练该特征提取模型。该深度学习采用卷积神经网络(Convolutional Neural Networks,CNN)进行学习。度量学习(Metric Learning)是一种空间映射的方法,其能够学习到一种特征(Embedding)空间,在此空间中,所有的数据都被转换成一个特征向量,并且相似样本的特征向量之间距离小,不相似样本的特征向量之间距离大,从而对数据进行区分。
特征提取模型中的卷积神经网络由多个卷积层组合而成,浅层的卷积层可以提取子图片中纹理、轮廓等局部细节的特征,高层的卷积层可以提取颜色、对比度等全局抽象的特征,最后整个卷积神经网络将子图片嵌入(embedding)为高维向量(一般有128维,256维,512维等等),并将该高维向量输出。该高维向量即子图片的子特征。
进一步地,电子设备还可以对子图片进行去噪、去褶皱等处理,再对处理后的子图片进行特征提取,可以提取到更准确的子特征。
将待匹配特征与各个参考图片的参考特征分别进行匹配,确定待匹配图片与每一个参考图片之间的相似度,包括:
操作306,将子特征与各个参考图片的参考特征分别进行匹配,确定子图片与每一个参考图片之间的相似度。
子图片与参考图片的相似度越高,表示子图片的子特征与参考图片的参考特征越接近。
具体地,电子设备计算子特征与参考特征之间的余弦相似度,将该余弦相似度作为子图片与参考图片之间的相似度。其中,余弦相似度是用向量空间中两个向量夹角的余弦值衡量两个个体之间相似程度的度量,余弦值越大,表示两个向量越接近。
基于待匹配图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片,包括:
操作308,基于子图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片。
在本实施例中,从待匹配图片中确定待匹配区域,根据待匹配区域得到子图片,将子图片的子特征与参考图片的参考特征进行匹配,避免了获取待匹配图片的所有区域的特征,也避免了将待匹配图片的所有区域的特征进行匹配,节约了电子设备的资源,从而提高特征匹配的效率,可以更快速确定目标图片。
在一个实施例中,提取子图片的子特征,包括:获取目标尺度;将子图片的大小调整至目标尺度;将目标尺度的子图片中各个像素点的像素值进行归一化处理;对归一化处理后的子图片进行特征提取,得到子图片的子特征。
可以理解的是,从待匹配图片中确定的待匹配区域,再根据待匹配区域得到子图片,子图片的尺度大小与参考图片的尺度大小可能存在不同,因此,将子图片的大小调整至目标尺度。目标尺度可以根据用户需要进行设定。当目标尺度比子图片的原本尺度大时,则将子图片进行扩大;当目标尺度比子图片的原本尺度小时,则将子图片进行缩小。
例如,获取的目标尺度是(224×224像素),则将子图片的大小调整至目标尺度(224×224像素)。
归一化指的是将数据映射到0~1范围之内,可以更加便捷快速进行处理。具体地,获取目标尺度的子图片中的各个像素点的像素值,将像素值从0-255映射至0-1范围内。
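调整至目标尺度并做归一化的处理可示意如下(假设性示例:真实实现通常采用双线性插值,此处用最近邻插值代替):

```python
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 224) -> np.ndarray:
    """用最近邻插值把图片调整到 size×size 的目标尺度。"""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # 每个输出行对应的输入行索引
    cols = np.arange(size) * w // size   # 每个输出列对应的输入列索引
    return img[rows][:, cols]

def normalize(img: np.ndarray) -> np.ndarray:
    """把 0-255 的像素值映射到 0-1 范围之内。"""
    return img.astype(np.float32) / 255.0

sub_img = np.full((100, 160, 3), 255, dtype=np.uint8)  # 子图片(示意)
processed = normalize(resize_nearest(sub_img, 224))
```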
在本实施例中,将子图片的大小调整至目标尺度;将目标尺度的子图片中各个像素点的像素值进行归一化处理,可以便于后续对归一化处理后的子图片进行处理。
在一个实施例中,获取参考图片,以及参考图片的参考特征之前,还包括:获取参考图片;将参考图片的大小调整至目标尺度;将目标尺度的参考图片中各个像素点的像素值进行归一化处理;对归一化处理后的参考图片进行特征提取,得到参考图片的参考特征。
可以理解的是,将参考图片和子图片的大小均调整至目标尺度,则参考图片和子图片可以在相同条件下进行特征匹配,可以更加准确地得出子图片与参考图片之间的相似度,从而更加准确地从参考图片中确定目标图片。并且,将参考图片中的各个像素点的像素值进行归一化处理,便于后续对参考图片进行处理。
在一个实施例中,如图4所示,从待匹配图片中确定待匹配区域,包括:
操作402,生成与待匹配图片对应的中心权重图,其中,中心权重图所表示的权重值从中心到边缘逐渐减小。
其中,中心权重图是指用于记录待匹配图片中各个像素点的权重值的图。中心权重图中记录的权重值从中心向四边逐渐减小,即中心权重最大,向四边权重逐渐减小。通过中心权重图表征待匹配图片的图片中心像素点到图片边缘像素点的权重值逐渐减小。
ISP处理器或中央处理器可以根据待匹配图片的大小生成对应的中心权重图。该中心权重图所表示的权重值从中心向四边逐渐减小。中心权重图可采用高斯函数、或采用一阶方程、或二阶方程生成。该高斯函数可为二维高斯函数。
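采用二维高斯函数生成中心权重图的方式可示意如下(sigma 等参数为演示取值):

```python
import numpy as np

def center_weight_map(h: int, w: int, sigma: float = 0.5) -> np.ndarray:
    """生成中心权重图:中心像素权重最大,向四边按二维高斯函数逐渐减小。"""
    ys = np.linspace(-1.0, 1.0, h)[:, None]   # 归一化的行坐标
    xs = np.linspace(-1.0, 1.0, w)[None, :]   # 归一化的列坐标
    return np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))

wm = center_weight_map(5, 5)
```

权重图与待匹配图片同尺寸,可直接与图片一起输入主体检测模型。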
操作404,将待匹配图片和中心权重图输入到主体检测模型中,得到主体区域置信度图,其中,主体检测模型是预先根据同一场景的待匹配图片、中心权重图及对应的已标注的主体掩膜图进行训练得到的模型。
其中,主体检测模型是预先采集大量的训练数据,将训练数据输入到包含有初始网络权重的主体检测模型进行训练得到的。每组训练数据包括同一场景对应的待匹配图片、中心权重图及已标注的主体掩膜图。其中,待匹配图片和中心权重图作为训练的主体检测模型的输入,已标注的主体掩膜(mask)图作为训练的主体检测模型期望输出得到的真实值(ground truth)。主体掩膜图是用于识别图片中主体的图像滤镜模板,可以遮挡图片的其他部分,筛选出图片中的主体。主体检测模型可训练能够识别检测各种主体,如人、花、猫、狗、背景等。
具体地,ISP处理器或中央处理器可将该待匹配图片和中心权重图输入到主体检测模型中,进行检测可以得到主体区域置信度图。主体区域置信度图是用于记录主体属于哪种能识别的主体的概率,例如某个像素点属于人的概率是0.8,花的概率是0.1,背景的概率是0.1。
操作406,根据主体区域置信度图确定待匹配图片中的目标主体,将目标主体所在的区域作为待匹配区域。
具体地,ISP处理器或中央处理器可根据主体区域置信度图选取置信度最高或次高等作为待匹配图片中的主体,若存在一个主体,则将该主体作为目标主体;若存在多个主体,可根据需要选择其中一个或多个主体作为目标主体。
在本实施例中,生成与待匹配图片对应的中心权重图后,将待匹配图片和中心权重图输入到对应的主体检测模型中进行检测,可以得到主体区域置信度图,根据主体区域置信度图可以确定待匹配图片中的目标主体。利用中心权重图可以让图像中心的对象更容易被检测;利用由待匹配图片、中心权重图和主体掩膜图等训练得到的主体检测模型,可以更加准确地识别出待匹配图片中的目标主体,将该目标主体所在区域作为待匹配区域,从而更准确地确定待匹配图片中的待匹配区域。
在一个实施例中,上述方法还包括:将各个参考图片分成至少两个参考类别;基于待匹配图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片,包括:基于待匹配图片与每一个参考图片之间的相似度,确定待匹配图片的类别;将与待匹配图片的类别相匹配的参考类别作为目标类别,并从目标类别所包括的各个参考图片中确定目标图片。
电子设备获取各个参考图片的标签,将同一标签的参考图片划分为同一参考类别。例如,参考图片A的标签是"建筑",参考图片B的标签是"花朵",参考图片C的标签是"花朵",参考图片D的标签是"建筑",参考图片E的标签是"建筑",则将参考图片A、参考图片D和参考图片E划分为同一参考类别"建筑",将参考图片B和参考图片C划分为同一参考类别"花朵"。
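按标签把参考图片划分为参考类别的过程可示意如下(标签与图片名取自上文示例):

```python
from collections import defaultdict

labels = {"参考图片A": "建筑", "参考图片B": "花朵", "参考图片C": "花朵",
          "参考图片D": "建筑", "参考图片E": "建筑"}

# 同一标签的参考图片划分为同一参考类别
categories = defaultdict(list)
for pic, label in labels.items():
    categories[label].append(pic)
```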
可以理解的是,待匹配图片与参考图片之间的相似度越高,表示待匹配图片的待匹配特征与参考图片的参考特征越接近,也表示待匹配图片与参考图片的类别越接近。
在一种实施方式中,电子设备可以将相似度最高的参考图片对应的参考类别作为待匹配图片的类别。在另一种实施方式中,电子设备也可以获取相似度最高的预设数量的参考图片,将预设数量的参考图片中数量最多的参考类别作为待匹配图片的类别。在其他实施方式中,电子设备还可以通过其他的方式确定待匹配图片的类别,不限于此。
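取相似度最高的预设数量的参考图片、以其中数量最多的参考类别作为待匹配图片类别的实施方式,可示意如下(predict_category 为示意用函数名):

```python
from collections import Counter

def predict_category(ranked_refs, k=5):
    """ranked_refs 为已按相似度从高到低排序的 (参考图片, 参考类别) 列表,
    取前 k 个中数量最多的参考类别作为待匹配图片的类别。"""
    top_categories = [cat for _, cat in ranked_refs[:k]]
    return Counter(top_categories).most_common(1)[0][0]

ranked = [("参考图片1", "建筑"), ("参考图片2", "花朵"), ("参考图片3", "建筑"),
          ("参考图片4", "建筑"), ("参考图片5", "花朵")]
category = predict_category(ranked, k=5)
```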
目标类别指的是与待匹配图片的类别相匹配的参考类别。目标图片的数量可以是一个,也可以是至少两个。
在本实施例中,确定待匹配图片的类别,再将与待匹配图片的类别相匹配的参考类别作为目标类别,从目标类别所包括的参考图片中确定目标图片,避免了从所有参考图片中确定目标图片,既可以提高目标图片确定的效率,也可以提高目标图片确定的准确性。
在一个实施例中,如图5所示,电子设备获取参考图片502;执行操作504,对参考图片502进行分类处理,将参考图片502分成至少两个参考类别。电子设备执行操作506,对分类处理后的参考图片进行去噪去褶皱,得到图片库508。电子设备执行操作510,对图片库508中的参考图片进行深度学习和度量学习,可以获取到图片库中的各个参考图片的参考特征,从而生成图片特征库512。
需要指出的是,502至512的执行过程可以预先进行处理,也可以在表盘生成过程中进行处理,不限于此。
电子设备获取待匹配图片514;执行操作516,对待匹配图片514进行去噪去褶皱;再执行操作518,对去噪去褶皱之后的待匹配图片提取特征,得到待匹配特征。电子设备执行操作520,将待匹配特征与参考图片的参考特征进行特征匹配,得到待匹配图片与每一个参考图片之间的相似度;再基于待匹配图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片522;获取时间元素,基于时间元素和目标图片生成表盘524。
进一步地,在得到待匹配图片与每一个参考图片之间的相似度之后,电子设备可以基于待匹配图片与每一个参考图片之间的相似度,确定待匹配图片的类别,将与待匹配图片的类别相匹配的参考类别作为目标类别,并从目标类别所包括的各个参考图片中确定目标图片522,可以提高确定目标图片的效率。
在一个实施例中,获取时间元素,基于时间元素和目标图片生成表盘,包括:获取时间元素,基于时间元素和目标类别中确定的各个目标图片分别生成各个候选表盘,并将各个候选表盘展示在显示界面中;接收对候选表盘的选取指令,并将选取指令选取的候选表盘展示在显示界面中,生成表盘。
当确定的目标图片的数量是一个时,则基于时间元素和该目标图片生成候选表盘,电子设备可以直接将该候选表盘生成表盘。当确定的目标图片的数量是至少两个时,则基于时间元素和该目标图片生成至少两个候选表盘,并将至少两个候选表盘展示在显示界面;当接收到对候选表盘的选取指令时,将选取指令选取的候选表盘展示在显示界面上,从而生成表盘。
在本实施例中,基于时间元素和目标类别中确定的目标图片生成各个候选表盘,可以从各个候选表盘中选取一个展示在显示界面上,从而生成表盘,提高生成的表盘的丰富性。
在一个实施例中,获取时间元素,基于时间元素和目标图片生成表盘,包括:获取目标图片的类别;基于目标图片的类别获取对应的目标样式;获取目标样式的时间元素,基于目标图片和目标样式的时间元素生成表盘。
在电子设备中,可以预先存储有各个类别对应的至少一种样式。当电子设备获取到目标图片的类别时,将目标图片的类别与存储的各个类别进行匹配,从而获取到目标图片的类别对应的目标样式。目标样式例如卡通样式、风景样式、建筑样式等等。
例如,目标图片的类别是"建筑",则从电子设备的存储器中获取到类别为"建筑"的各个样式,如"广州塔"样式、"世界之窗"样式、"黄鹤楼"样式等。
在本实施例中,获取目标图片的类别对应的目标样式的时间元素,该时间元素与目标图片更加匹配,契合度更高,基于目标图片和目标样式的时间元素可以生成更加准确的表盘。
在另一个实施例中,提供了一种表盘生成方法,包括:获取待匹配图片;提取待匹配图片的待匹配特征;获取参考图片,以及参考图片的参考特征;将待匹配特征与各个参考图片的参考特征分别进行匹配,确定待匹配图片与每一个参考图片之间的相似度;基于待匹配图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片;将目标图片发送至可穿戴设备;目标图片用于指示可穿戴设备获取时间元素,基于时间元素和目标图片生成表盘。
可穿戴设备例如智能手表、智能手环等。
可以理解的是,在电子设备中执行提取特征、特征匹配等耗时、工作量大的任务,再将最终确定的目标图片发送至可穿戴设备中,可穿戴设备仅需获取时间元素,再基于时间元素和目标图片生成表盘,减轻可穿戴设备的运行压力,从而可以更好地实现可穿戴设备的其他功能。
在另一个实施例中,提供了一种表盘生成方法,应用于可穿戴设备,包括:获取电子设备发送的目标图片;目标图片是电子设备基于获取的待匹配图片与获取的每一个参考图片之间的相似度,从各个参考图片中确定的;待匹配图片与每一个参考图片之间的相似度,是电子设备将待匹配图片的待匹配特征与各个参考图片的参考特征分别进行匹配得到的;待匹配图片的待匹配特征是电子设备从待匹配图片中提取的;获取时间元素,基于时间元素和目标图片生成表盘。
生成目标图片的过程需要执行提取特征、特征匹配等耗时、工作量大的任务,该任务在电子设备中执行;而可穿戴设备接收到电子设备发送的目标图片,再获取时间元素,即可基于时间元素和目标图片生成表盘,减轻了可穿戴设备的运行压力,从而可以更好地实现可穿戴设备的其他功能。
应该理解的是,虽然图2至图4的流程图中的各个操作按照箭头的指示依次显示,但是这些操作并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些操作的执行并没有严格的顺序限制,这些操作可以以其它的顺序执行。而且,图2至图4中的至少一部分操作可以包括多个子操作或者多个阶段,这些子操作或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子操作或者阶段的执行顺序也不必然是依次进行,而是可以与其它操作或者其它操作的子操作或者阶段的至少一部分轮流或者交替地执行。
图6为一个实施例的表盘生成装置的结构框图。如图6所示,提供了一种表盘生成装置600,包括:待匹配图片获取模块602、特征提取模块604、参考图片和参考特征获取模块606、匹配模块608、目标图片确定模块610和表盘生成模块612,其中:
待匹配图片获取模块602,用于获取待匹配图片。
特征提取模块604,用于提取待匹配图片的待匹配特征。
参考图片和参考特征获取模块606,用于获取参考图片,以及参考图片的参考特征。
匹配模块608,用于将待匹配特征与各个参考图片的参考特征分别进行匹配,确定待匹配图片与每一个参考图片之间的相似度。
目标图片确定模块610,用于基于待匹配图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片。
表盘生成模块612,用于获取时间元素,基于时间元素和目标图片生成表盘。
上述表盘生成装置,将待匹配图片的待匹配特征与参考图片的参考特征进行匹配,从而可以基于待匹配图片与参考图片之间的相似度,从各个参考图片中确定更加准确的目标图片,再获取时间元素,基于时间元素和目标图片可以生成更加准确的表盘。
在一个实施例中,上述表盘生成装置600还包括中间图片确定模块,用于基于待匹配图片的待匹配特征确定待匹配图片的类别,确定与待匹配图片的类别相匹配的参考图片,并将与待匹配图片的类别相匹配的参考图片作为中间图片;上述匹配模块608还用于将待匹配特征与各个中间图片的参考特征分别进行匹配,确定待匹配图片与每一个中间图片之间的相似度;上述目标图片确定模块610还用于基于待匹配图片与每一个中间图片之间的相似度,从各个中间图片中确定目标图片。
在一个实施例中,上述表盘生成装置600还包括子图片获取模块,用于从待匹配图片中确定待匹配区域,根据待匹配区域得到子图片;上述特征提取模块604还用于提取子图片的子特征;上述匹配模块608还用于将子特征与各个参考图片的参考特征分别进行匹配,确定子图片与每一个参考图片之间的相似度;上述目标图片确定模块610还用于基于子图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片。
在一个实施例中,上述特征提取模块604还用于获取目标尺度;将子图片的大小调整至目标尺度;将目标尺度的子图片中各个像素点的像素值进行归一化处理;对归一化处理后的子图片进行特征提取,得到子图片的子特征。
在一个实施例中,上述特征提取模块604还用于获取参考图片;将参考图片的大小调整至目标尺度;将目标尺度的参考图片中各个像素点的像素值进行归一化处理;对归一化处理后的参考图片进行特征提取,得到参考图片的参考特征。
在一个实施例中,上述子图片获取模块还用于生成与待匹配图片对应的中心权重图,其中,中心权重图所表示的权重值从中心到边缘逐渐减小;将待匹配图片和中心权重图输入到主体检测模型中,得到主体区域置信度图,其中,主体检测模型是预先根据同一场景的待匹配图片、中心权重图及对应的已标注的主体掩膜图进行训练得到的模型;根据主体区域置信度图确定待匹配图片中的目标主体,将目标主体所在的区域作为待匹配区域。
在一个实施例中,上述表盘生成装置还包括分类模块,用于将各个参考图片分成至少两个参考类别;上述目标图片确定模块610还用于基于待匹配图片与每一个参考图片之间的相似度,确定待匹配图片的类别;将与待匹配图片的类别相匹配的参考类别作为目标类别,并从目标类别所包括的各个参考图片中确定目标图片。
在一个实施例中,上述表盘生成模块612还用于获取时间元素,基于时间元素和目标类别中确定的各个目标图片分别生成各个候选表盘,并将各个候选表盘展示在显示界面中;接收对候选表盘的选取指令,并将选取指令选取的候选表盘展示在显示界面中,生成表盘。
在一个实施例中,上述表盘生成模块612还用于获取目标图片的类别;基于目标图片的类别获取对应的目标样式;获取目标样式的时间元素,基于目标图片和目标样式的时间元素生成表盘。
图7为另一个实施例的表盘生成装置的结构框图。如图7所示,提供了一种表盘生成装置700,包括:待匹配图片获取模块702、特征提取模块704、参考图片和参考特征获取模块706、匹配模块708、目标图片确定模块710和表盘生成模块712,其中:
待匹配图片获取模块702,用于获取待匹配图片。
特征提取模块704,用于提取待匹配图片的待匹配特征。
参考图片和参考特征获取模块706,用于获取参考图片,以及参考图片的参考特征。
匹配模块708,用于将待匹配特征与各个参考图片的参考特征分别进行匹配,确定待匹配图片与每一个参考图片之间的相似度。
目标图片确定模块710,用于基于待匹配图片与每一个参考图片之间的相似度,从各个参考图片中确定目标图片。
表盘生成模块712,用于将目标图片发送至可穿戴设备;目标图片用于指示可穿戴设备获取时间元素,基于时间元素和目标图片生成表盘。
上述表盘生成装置,将待匹配图片的待匹配特征与参考图片的参考特征进行匹配,从而可以基于待匹配图片与参考图片之间的相似度,从各个参考图片中确定更加准确的目标图片,将目标图片发送至可穿戴设备,用于可穿戴设备生成更加准确的表盘。
图8为另一个实施例的表盘生成装置的结构框图。如图8所示,提供了一种表盘生成装置800,包括:目标图片获取模块802和表盘生成模块804,其中:
目标图片获取模块802,用于获取电子设备发送的目标图片;目标图片是电子设备基于获取的待匹配图片与获取的每一个参考图片之间的相似度,从各个参考图片中确定的;待匹配图片与每一个参考图片之间的相似度,是电子设备将待匹配图片的待匹配特征与各个参考图片的参考特征分别进行匹配得到的;待匹配图片的待匹配特征是电子设备从待匹配图片中提取的。
表盘生成模块804,用于获取时间元素,基于时间元素和目标图片生成表盘。
上述表盘生成装置,获取电子设备发送的目标图片,该目标图片是电子设备基于待匹配图片与参考图片之间的相似度,从各个参考图片中确定的,因此确定的目标图片更加准确;再获取时间元素,基于时间元素和目标图片生成更加准确的表盘。
上述表盘生成装置中各个模块的划分仅仅用于举例说明,在其他实施例中,可将表盘生成装置按照需要划分为不同的模块,以完成上述表盘生成装置的全部或部分功能。
关于表盘生成装置的具体限定可以参见上文中对于表盘生成方法的限定,在此不再赘述。上述表盘生成装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
图9为一个实施例中电子设备的内部结构示意图。如图9所示,该电子设备包括通过系统总线连接的处理器和存储器。其中,该处理器用于提供计算和控制能力,支撑整个电子设备的运行。存储器可包括非易失性存储介质及内存储器。非易失性存储介质存储有操作系统和计算机程序。该计算机程序可被处理器所执行,以用于实现以下各个实施例所提供的一种表盘生成方法。内存储器为非易失性存储介质中的操作系统和计算机程序提供高速缓存的运行环境。该电子设备可以是手机、平板电脑、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑、穿戴式设备等任意终端设备。
本申请实施例中提供的表盘生成装置中的各个模块的实现可为计算机程序的形式。该计算机程序可在电子设备上运行。该计算机程序构成的程序模块可存储在电子设备的存储器上。该计算机程序被处理器执行时,实现本申请实施例中所描述方法的操作。
本申请实施例中提供的表盘生成装置中的各个模块的实现可为计算机程序的形式。该计算机程序可在可穿戴设备上运行。该计算机程序构成的程序模块可存储在可穿戴设备的存储器上。该计算机程序被处理器执行时,实现本申请实施例中所描述方法的操作。
本申请实施例还提供了一种计算机可读存储介质。一个或多个包含计算机可执行指令的非易失性计算机可读存储介质,当所述计算机可执行指令被一个或多个处理器执行时,使得所述处理器执行表盘生成方法的操作。
一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行表盘生成方法。
本申请所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM),它用作外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDR SDRAM)、增强型 SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (23)

  1. 一种表盘生成方法,其特征在于,包括:
    获取待匹配图片;
    提取所述待匹配图片的待匹配特征;
    获取参考图片,以及所述参考图片的参考特征;
    将所述待匹配特征与各个所述参考图片的参考特征分别进行匹配,确定所述待匹配图片与每一个所述参考图片之间的相似度;
    基于所述待匹配图片与每一个所述参考图片之间的相似度,从各个所述参考图片中确定目标图片;及
    获取时间元素,基于所述时间元素和所述目标图片生成表盘。
  2. 根据权利要求1所述的方法,其特征在于,所述获取待匹配图片之后,还包括:
    从所述待匹配图片中确定待匹配区域,根据所述待匹配区域得到子图片;
    所述提取所述待匹配图片的待匹配特征,包括:
    提取所述子图片的子特征;
    所述将所述待匹配特征与各个所述参考图片的参考特征分别进行匹配,确定所述待匹配图片与每一个所述参考图片之间的相似度,包括:
    将所述子特征与各个所述参考图片的参考特征分别进行匹配,确定所述子图片与每一个所述参考图片之间的相似度;及
    所述基于所述待匹配图片与每一个所述参考图片之间的相似度,从各个所述参考图片中确定目标图片,包括:
    基于所述子图片与每一个所述参考图片之间的相似度,从各个所述参考图片中确定目标图片。
  3. 根据权利要求2所述的方法,其特征在于,所述提取所述子图片的子特征,包括:
    获取目标尺度;
    将所述子图片的大小调整至目标尺度;
    将所述目标尺度的子图片中各个像素点的像素值进行归一化处理;及
    对归一化处理后的子图片进行特征提取,得到所述子图片的子特征。
  4. 根据权利要求3所述的方法,其特征在于,所述获取参考图片,以及所述参考图片的参考特征之前,还包括:
    获取参考图片;
    将所述参考图片的大小调整至目标尺度;
    将所述目标尺度的参考图片中各个像素点的像素值进行归一化处理;及
    对归一化处理后的参考图片进行特征提取,得到所述参考图片的参考特征。
  5. 根据权利要求2所述的方法,其特征在于,所述从所述待匹配图片中确定待匹配区域,包括:
    生成与所述待匹配图片对应的中心权重图,其中,所述中心权重图所表示的权重值从中心到边缘逐渐减小;
    将所述待匹配图片和所述中心权重图输入到主体检测模型中,得到主体区域置信度图,其中,所述主体检测模型是预先根据同一场景的待匹配图片、中心权重图及对应的已标注的主体掩膜图进行训练得到的模型;及
    根据所述主体区域置信度图确定所述待匹配图片中的目标主体,将所述目标主体所在的区域作为待匹配区域。
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    将各个所述参考图片分成至少两个参考类别;
    所述基于所述待匹配图片与每一个所述参考图片之间的相似度,从各个所述参考图片中确定目标图片,包括:
    基于所述待匹配图片与每一个所述参考图片之间的相似度,确定所述待匹配图片的类别;及
    将与所述待匹配图片的类别相匹配的参考类别作为目标类别,并从所述目标类别所包括的各个所述参考图片中确定目标图片。
  7. 根据权利要求6所述的方法,其特征在于,所述获取时间元素,基于所述时间元素和所述目标图片生成表盘,包括:
    获取时间元素,基于所述时间元素和所述目标类别中确定的各个所述目标图片分别生成各个候选表盘,并将各个所述候选表盘展示在显示界面中;及
    接收对所述候选表盘的选取指令,并将所述选取指令选取的候选表盘展示在所述显示界面中,生成表盘。
  8. 根据权利要求1所述的方法,其特征在于,所述获取时间元素,基于所述时间元素和所述目标图片生成表盘,包括:
    获取所述目标图片的类别;
    基于所述目标图片的类别获取对应的目标样式;及
    获取所述目标样式的时间元素,基于所述目标图片和所述目标样式的时间元素生成表盘。
  9. 一种表盘生成方法,其特征在于,包括:
    获取待匹配图片;
    提取所述待匹配图片的待匹配特征;
    获取参考图片,以及所述参考图片的参考特征;
    将所述待匹配特征与各个所述参考图片的参考特征分别进行匹配,确定所述待匹配图片与每一个所述参考图片之间的相似度;
    基于所述待匹配图片与每一个所述参考图片之间的相似度,从各个所述参考图片中确定目标图片;及
    将所述目标图片发送至可穿戴设备;所述目标图片用于指示所述可穿戴设备获取时间元素,基于所述时间元素和所述目标图片生成表盘。
  10. 一种表盘生成方法,其特征在于,应用于可穿戴设备,包括:
    获取电子设备发送的目标图片;所述目标图片是所述电子设备基于获取的待匹配图片与获取的每一个参考图片之间的相似度,从各个参考图片中确定的;所述待匹配图片与每一个所述参考图片之间的相似度,是所述电子设备将所述待匹配图片的待匹配特征与各个所述参考图片的参考特征分别进行匹配得到的;所述待匹配图片的待匹配特征是所述电子设备从所述待匹配图片中提取的;及
    获取时间元素,基于所述时间元素和所述目标图片生成表盘。
  11. 一种表盘生成装置,其特征在于,包括:
    待匹配图片获取模块,用于获取待匹配图片;
    特征提取模块,用于提取所述待匹配图片的待匹配特征;
    参考图片和参考特征获取模块,用于获取参考图片,以及所述参考图片的参考特征;
    匹配模块,用于将所述待匹配特征与各个所述参考图片的参考特征分别进行匹配,确定所述待匹配图片与每一个所述参考图片之间的相似度;
    目标图片确定模块,用于基于所述待匹配图片与每一个所述参考图片之间的相似度,从各个所述参考图片中确定目标图片;及
    表盘生成模块,用于获取时间元素,基于所述时间元素和所述目标图片生成表盘。
  12. 根据权利要求11所述的装置,其特征在于,所述装置还包括子图片获取模块;所述子图片获取模块用于从所述待匹配图片中确定待匹配区域,根据所述待匹配区域得到子图片;
    所述特征提取模块还用于提取所述子图片的子特征;
    所述匹配模块还用于将所述子特征与各个所述参考图片的参考特征分别进行匹配,确定所述子图片与每一个所述参考图片之间的相似度;及
    所述目标图片确定模块还用于基于所述子图片与每一个所述参考图片之间的相似度,从各个所述参考图片中确定目标图片。
  13. 根据权利要求12所述的装置,其特征在于,所述特征提取模块还用于获取目标尺度;将所述子图片的大小调整至目标尺度;将所述目标尺度的子图片中各个像素点的像素值进行归一化处理;及对归一化处理后的子图片进行特征提取,得到所述子图片的子特征。
  14. 根据权利要求13所述的装置,其特征在于,所述特征提取模块还用于获取参考图片;将所述参考图片的大小调整至目标尺度;将所述目标尺度的参考图片中各个像素点的像素值进行归一化处理;及对归一化处理后的参考图片进行特征提取,得到所述参考图片的参考特征。
  15. 根据权利要求12所述的装置,其特征在于,所述子图片获取模块还用于生成与所述待匹配图片对应的中心权重图,其中,所述中心权重图所表示的权重值从中心到边缘逐渐减小;将所述待匹配图片和所述中心权重图输入到主体检测模型中,得到主体区域置信度图,其中,所述主体检测模型是预先根据同一场景的待匹配图片、中心权重图及对应的已标注的主体掩膜图进行训练得到的模型;及根据所述主体区域置信度图确定所述待匹配图片中的目标主体,将所述目标主体所在的区域作为待匹配区域。
  16. 根据权利要求11所述的装置,其特征在于,所述装置还包括分类模块;所述分类模块还用于将各个所述参考图片分成至少两个参考类别;
    所述目标图片确定模块还用于基于所述待匹配图片与每一个所述参考图片之间的相似度,确定所述待匹配图片的类别;及将与所述待匹配图片的类别相匹配的参考类别作为目标类别,并从所述目标类别所包括的各个所述参考图片中确定目标图片。
  17. 根据权利要求16所述的装置,其特征在于,所述表盘生成模块还用于获取时间元素,基于所述时间元素和所述目标类别中确定的各个所述目标图片分别生成各个候选表盘,并将各个所述候选表盘展示在显示界面中;及接收对所述候选表盘的选取指令,并将所述选取指令选取的候选表盘展示在所述显示界面中,生成表盘。
  18. 根据权利要求11所述的装置,其特征在于,所述表盘生成模块还用于获取所述目标图片的类别;基于所述目标图片的类别获取对应的目标样式;及获取所述目标样式的时间元素,基于所述目标图片和所述目标样式的时间元素生成表盘。
  19. 一种表盘生成装置,其特征在于,包括:
    待匹配图片获取模块,用于获取待匹配图片;
    特征提取模块,用于提取所述待匹配图片的待匹配特征;
    参考图片和参考特征获取模块,用于获取参考图片,以及所述参考图片的参考特征;
    匹配模块,用于将所述待匹配特征与各个所述参考图片的参考特征分别进行匹配,确定所述待匹配图片与每一个所述参考图片之间的相似度;
    目标图片确定模块,用于基于所述待匹配图片与每一个所述参考图片之间的相似度,从各个所述参考图片中确定目标图片;及
    表盘生成模块,用于将所述目标图片发送至可穿戴设备;所述目标图片用于指示所述可穿戴设备获取时间元素,基于所述时间元素和所述目标图片生成表盘。
  20. 一种表盘生成装置,其特征在于,应用于可穿戴设备,包括:
    目标图片获取模块,用于获取电子设备发送的目标图片;所述目标图片是所述电子设备基于获取的待匹配图片与获取的每一个参考图片之间的相似度,从各个参考图片中确定的;所述待匹配图片与每一个所述参考图片之间的相似度,是所述电子设备将所述待匹配图片的待匹配特征与各个所述参考图片的参考特征分别进行匹配得到的;所述待匹配图片的待匹配特征是所述电子设备从所述待匹配图片中提取的;及
    表盘生成模块,用于获取时间元素,基于所述时间元素和所述目标图片生成表盘。
  21. 一种电子设备,包括存储器及处理器,所述存储器中储存有计算机程序,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器执行如权利要求1至9中任一项所述的表盘生成方法的操作。
  22. 一种可穿戴设备,包括存储器及处理器,所述存储器中储存有计算机程序,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器执行如权利要求10所述的表盘生成方法的操作。
  23. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至10中任一项所述的方法的操作。
PCT/CN2021/086409 2020-06-04 2021-04-12 表盘生成方法、装置、电子设备和计算机可读存储介质 WO2021244138A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010499509.0A CN113760415A (zh) 2020-06-04 2020-06-04 表盘生成方法、装置、电子设备和计算机可读存储介质
CN202010499509.0 2020-06-04

Publications (1)

Publication Number Publication Date
WO2021244138A1 true WO2021244138A1 (zh) 2021-12-09

Family

ID=78783573

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086409 WO2021244138A1 (zh) 2020-06-04 2021-04-12 表盘生成方法、装置、电子设备和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN113760415A (zh)
WO (1) WO2021244138A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911792A (zh) * 2024-03-15 2024-04-19 垣矽技术(青岛)有限公司 一种电压基准源芯片生产用引脚检测系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855245A (zh) * 2011-06-28 2013-01-02 北京百度网讯科技有限公司 一种用于确定图片相似度的方法与设备
CN105045818A (zh) * 2015-06-26 2015-11-11 腾讯科技(深圳)有限公司 一种图片的推荐方法、装置和系统
CN105469376A (zh) * 2014-08-12 2016-04-06 腾讯科技(深圳)有限公司 确定图片相似度的方法和装置
CN109189544A (zh) * 2018-10-17 2019-01-11 三星电子(中国)研发中心 用于生成表盘的方法和装置
CN109189970A (zh) * 2018-09-20 2019-01-11 北京京东尚科信息技术有限公司 图片相似度比对方法和装置
CN109726664A (zh) * 2018-12-24 2019-05-07 出门问问信息科技有限公司 一种智能表盘推荐方法、系统、设备及存储介质
US10379721B1 (en) * 2016-11-28 2019-08-13 A9.Com, Inc. Interactive interfaces for generating annotation information
CN110569380A (zh) * 2019-09-16 2019-12-13 腾讯科技(深圳)有限公司 一种图像标签获取方法、装置及存储介质和服务器

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639858A (zh) * 2009-08-21 2010-02-03 深圳创维数字技术股份有限公司 基于目标区域匹配的图像检索方法
CN106354735A (zh) * 2015-07-22 2017-01-25 杭州海康威视数字技术股份有限公司 一种图像中目标的检索方法和装置
CN105678778B (zh) * 2016-01-13 2019-02-26 北京大学深圳研究生院 一种图像匹配方法和装置
CN106682698A (zh) * 2016-12-29 2017-05-17 成都数联铭品科技有限公司 基于模板匹配的ocr识别方法
CN108874889B (zh) * 2018-05-15 2021-01-12 中国科学院自动化研究所 基于目标体图像的目标体检索方法、系统及装置
CN110276767B (zh) * 2019-06-28 2021-08-31 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911792A (zh) * 2024-03-15 2024-04-19 垣矽技术(青岛)有限公司 一种电压基准源芯片生产用引脚检测系统
CN117911792B (zh) * 2024-03-15 2024-06-04 垣矽技术(青岛)有限公司 一种电压基准源芯片生产用引脚检测系统

Also Published As

Publication number Publication date
CN113760415A (zh) 2021-12-07

Similar Documents

Publication Publication Date Title
CN111080628B (zh) 图像篡改检测方法、装置、计算机设备和存储介质
US10366313B2 (en) Activation layers for deep learning networks
US10726244B2 (en) Method and apparatus detecting a target
WO2022213879A1 (zh) 目标对象检测方法、装置、计算机设备和存储介质
CN106778928B (zh) 图像处理方法及装置
WO2019100724A1 (zh) 训练多标签分类模型的方法和装置
WO2020192483A1 (zh) 图像显示方法和设备
WO2020038205A1 (zh) 目标检测方法、装置、计算机可读存储介质及计算机设备
CN110648375B (zh) 基于参考信息的图像彩色化
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
WO2021036059A1 (zh) 图像转换模型训练方法、异质人脸识别方法、装置及设备
WO2018086607A1 (zh) 一种目标跟踪方法及电子设备、存储介质
JP2023545565A (ja) 画像検出方法、モデルトレーニング方法、画像検出装置、トレーニング装置、機器及びプログラム
US10489636B2 (en) Lip movement capturing method and device, and storage medium
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
WO2024001095A1 (zh) 面部表情识别方法、终端设备及存储介质
CN112651333B (zh) 静默活体检测方法、装置、终端设备和存储介质
WO2022002262A1 (zh) 基于计算机视觉的字符序列识别方法、装置、设备和介质
US11436804B2 (en) Augmented reality system
CN111428671A (zh) 人脸结构化信息识别方法、系统、装置及存储介质
WO2021244138A1 (zh) 表盘生成方法、装置、电子设备和计算机可读存储介质
CN112651410A (zh) 用于鉴别的模型的训练、鉴别方法、系统、设备及介质
US11605220B2 (en) Systems and methods for video surveillance
US20220207917A1 (en) Facial expression image processing method and apparatus, and electronic device
CN108288023B (zh) 人脸识别的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21816965

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21816965

Country of ref document: EP

Kind code of ref document: A1