CN110620877B - Position information generation method, device, terminal and computer readable storage medium - Google Patents

Position information generation method, device, terminal and computer readable storage medium

Info

Publication number
CN110620877B
Authority
CN
China
Prior art keywords
target
information
shot
persons
sample
Prior art date
Legal status
Active
Application number
CN201910975466.6A
Other languages
Chinese (zh)
Other versions
CN110620877A
Inventor
鲁晋杰
李姫俊男
马标
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910975466.6A
Publication of CN110620877A
Application granted
Publication of CN110620877B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The application belongs to the technical field of photographing, and particularly relates to a position information generation method, a position information generation apparatus, a terminal and a computer-readable storage medium. The method includes: acquiring person information of a plurality of target persons to be photographed, where the person information includes one or more of name, sex, height, body width, age, position grade and clothing color information; and outputting position information corresponding to the plurality of target persons to be photographed according to their person information. This solves the technical problem of low group-photo shooting efficiency.

Description

Position information generation method, device, terminal and computer readable storage medium
Technical Field
The present application belongs to the field of photographing technologies, and in particular, to a method, an apparatus, a terminal, and a computer-readable storage medium for generating location information.
Background
With the development of communication technology, the shooting functions of terminals have become increasingly popular, and people often use a terminal to take group photos. For example, a group photo may need to be taken after a group event, during a trip, or at the end of a meeting.
However, when shooting a group photo, the photographer often needs to repeatedly adjust the positions of the subjects, which results in low shooting efficiency.
Disclosure of Invention
The embodiments of the application provide a position information generation method, apparatus, terminal and computer-readable storage medium, which can, to a certain extent, solve the technical problem of low group-photo shooting efficiency.
A first aspect of an embodiment of the present application provides a method for generating location information, including:
acquiring person information of a plurality of target persons to be photographed, wherein the person information comprises one or more of name, sex, height, body width, age, position grade and clothing color information;
and outputting the position information corresponding to the plurality of target persons to be shot according to the person information of the plurality of target persons to be shot.
A second aspect of the embodiments of the present application provides a position information generating apparatus, including:
an acquisition unit configured to acquire person information of a plurality of target persons to be photographed, wherein the person information comprises one or more of name, sex, height, body width, age, position grade and clothing color information;
and the output unit is used for outputting the position information corresponding to the target people to be shot according to the person information of the target people to be shot.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the above method.
In the embodiments of the application, person information of a plurality of target persons to be photographed is acquired, and position information corresponding to those target persons is output according to that person information. Because the output position information is derived from the person information of each target person to be photographed, it can match the actual situation of the target persons and optimize the group-photo effect. In addition, when the group photo is taken, the photographer only needs to adjust the positions of the target persons once, according to the output position information, to obtain a reasonable and attractive arrangement; the positions do not need to be adjusted repeatedly, which improves group-photo shooting efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a first implementation of a method for generating location information according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation process of training a preset personal information recognition model according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating an implementation of step 102 provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of a second implementation of a method for generating location information according to an embodiment of the present application;
FIG. 5 is a schematic illustration of a position map provided by an embodiment of the present application;
fig. 6 is a schematic flowchart of a third implementation of a method for generating location information according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an implementation flow for fusing location maps according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating the effect of fusing location maps provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a position information generating apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
With the development of communication technology, terminal shooting functions, such as photo or short-video capture, have become increasingly popular. In particular, users often need to take a group photo with a terminal after a team-building activity ends, during a trip, or after a meeting finishes.
However, when a group photo is taken, the subjects usually stand in random positions, so the arrangement in the captured image is often unreasonable. To make the arrangement more reasonable and attractive, the subjects' positions therefore frequently have to be adjusted repeatedly during shooting, which causes low shooting efficiency.
Based on this, embodiments of the present application provide a position information generation method, apparatus, terminal, and computer-readable storage medium, which can solve the problem of low photographing efficiency of a group photo to a certain extent.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 shows a schematic implementation flow chart of a position information generation method provided in an embodiment of the present application. The method is applied to a terminal, can be executed by a position information generating device configured on the terminal, and is suitable for situations where the shooting efficiency of a group photo needs to be improved. The terminal can be an intelligent terminal capable of photographing, such as a mobile phone, a computer, or a wearable device. The position information generation method may include steps 101 to 102.
Step 101, obtaining the person information of a plurality of target persons to be photographed.
The personal information may include one or more of name, sex, height, body width, age, position grade and clothing color information.
Specifically, the name can be used to identify each target person to be photographed. Height, body width and clothing color affect how well the arrangement and composition harmonize; for example, large differences in height, body width or clothing color between target persons standing together can reduce the aesthetics of the photo. Age and position grade affect the ordering of positions, i.e., older or higher-ranking target persons tend to stand in the middle. Therefore, acquiring the person information of the plurality of target persons to be photographed may include acquiring one or more of their name, sex, height, body width, age, position grade and clothing color information, so that the position information output according to this person information matches the actual situation of the target persons and yields a better group-photo effect.
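For illustration only, the person information described above can be pictured as a simple per-person record. The sketch below is a hypothetical Python representation; the field names are assumptions and are not part of the claimed method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonInfo:
    """Hypothetical record holding the person information used to generate position information."""
    name: Optional[str] = None             # identifies the target person
    sex: Optional[str] = None              # e.g. "male" / "female"
    height_cm: Optional[float] = None      # influences row placement and composition
    body_width_cm: Optional[float] = None  # e.g. shoulder width
    age: Optional[int] = None              # older persons tend to stand in the middle
    position_grade: Optional[int] = None   # job/position rank
    clothing_color: Optional[str] = None

# One entry per target person to be photographed
people = [
    PersonInfo(name="A", sex="female", height_cm=160, age=28),
    PersonInfo(name="B", sex="male", height_cm=178, age=45, position_grade=3),
]
```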
In some embodiments of the present application, the acquiring of the person information of the plurality of target persons to be photographed may include: the method comprises the steps of obtaining a preview frame image collected by a camera, and identifying character information of a plurality of target characters to be shot in the preview frame image.
That is, person information recognition is performed on the preview frame image acquired in real time to obtain the person information of the plurality of target persons to be photographed in the preview frame image.
Identifying the person information of the plurality of target persons to be photographed in the preview frame image may include: inputting the preview frame image into a preset person information recognition model, and outputting, by the preset person information recognition model, the person information of the plurality of target persons to be photographed in the preview frame image.
In some embodiments of the present application, before inputting the preview frame image into the preset personal information recognition model, the method may include: and training the figure information recognition model to be trained to obtain the preset figure information recognition model.
For example, as shown in fig. 2, training a to-be-trained personal information recognition model to obtain a preset personal information recognition model may include: step 201 to step 203.
Step 201, a plurality of second sample pictures are obtained.
Each of the second sample pictures is annotated in advance with the person information of a plurality of second sample persons.
In some embodiments of the present application, different second sample pictures may contain second sample persons whose person information is completely different or partially different. By training the to-be-trained person information recognition model with second sample pictures containing varied person information, the resulting preset person information recognition model can recognize person information in images containing many kinds of person information.
Step 202, inputting a target second sample picture among the second sample pictures into the to-be-trained person information recognition model, and outputting, by the to-be-trained person information recognition model, the person information of a plurality of second sample persons in the target second sample picture.
In this embodiment, the target second sample picture refers to any one of the second sample pictures. In the embodiment of the application, the person information identification model to be trained is trained sequentially by using a large number of second sample pictures, so that the obtained preset person information identification model can identify the person information of the image containing various kinds of person information.
Step 203, calculating a second similarity between the person information of the second sample persons in the target second sample picture output by the to-be-trained person information recognition model and the pre-labeled person information of the second sample persons in that picture. If the second similarity is smaller than a second similarity threshold, the parameters of the to-be-trained person information recognition model are adjusted and the model is trained again with the same target second sample picture. Once the second similarity is greater than or equal to the second similarity threshold, or the number of times the target second sample picture has been used for training reaches a first count threshold, training continues with the next target second sample picture among the second sample pictures. The preset person information recognition model is obtained when the total number of training iterations of the to-be-trained model reaches a second count threshold, or when the rate of change of the second similarity falls below a change rate threshold.
For example, suppose 100 second sample pictures are obtained. Any one of them is input into the to-be-trained person information recognition model, which outputs the person information of the second sample persons in that picture, and the second similarity is calculated. If the second similarity is greater than or equal to the second similarity threshold, the next second sample picture is used for training, and so on, until the total number of training iterations reaches the second count threshold or the rate of change of the second similarity falls below the change rate threshold, at which point the preset person information recognition model is obtained.
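A minimal sketch of the training loop described in steps 201 to 203 is given below. It assumes a generic `model` object with `forward` and `update_parameters` methods and a `similarity()` function comparing predicted and labeled person information; these names and the threshold values are assumptions for illustration, not the reference implementation.

```python
def train_recognition_model(model, samples, sim_threshold=0.9,
                            per_sample_limit=10, total_limit=10000,
                            change_rate_threshold=1e-4):
    """Sketch of the thresholded retraining loop of steps 201-203 (assumed API)."""
    total_steps, prev_sim = 0, None
    for picture, labeled_info in samples:        # each sample picture with pre-labeled person info
        per_sample_steps = 0
        while True:
            predicted_info = model.forward(picture)
            sim = similarity(predicted_info, labeled_info)   # the "second similarity"
            total_steps += 1
            per_sample_steps += 1
            # overall stopping conditions: total iteration budget or stabilized similarity
            if total_steps >= total_limit:
                return model
            if prev_sim is not None and abs(sim - prev_sim) < change_rate_threshold:
                return model
            prev_sim = sim
            # move on to the next sample picture once this one is learned well enough,
            # or once it has been reused too many times
            if sim >= sim_threshold or per_sample_steps >= per_sample_limit:
                break
            model.update_parameters(picture, labeled_info)   # adjust parameters and retry
    return model
```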
For the calculation of the second similarity between the personal information of the second sample persons in the second sample image output by the to-be-trained personal information identification model and the personal information of the second sample persons in the pre-labeled second sample image, reference may be made to the description in step 301 in this application, which is not repeated herein.
In the embodiments of the application, when the second similarity is greater than or equal to the second similarity threshold, or the number of times a second sample picture has been used to train the to-be-trained person information recognition model reaches the first count threshold, the model is considered able to accurately recognize the person information in that second sample picture. When the total number of training iterations reaches the second count threshold, or the rate of change of the second similarity falls below the change rate threshold, the accuracy of the person information output by the model has stabilized, which indicates that the training of the to-be-trained person information recognition model is complete.
It should be noted that, in some embodiments of the present application, the preset personal information identification model may be a Darknet model, and the personal information of a plurality of target persons to be captured in the preview frame image may be calculated according to a binary cross entropy loss function, so as to improve the identification accuracy of the personal information. For example, the Darknet model is the Darknet-53 model.
When the Darknet model is used to recognize the person information of the plurality of target persons to be photographed in the preview frame image, the positions of the target persons in the preview frame image are identified first, and then the person information of each identified target person is detected.
When identifying the positions of the plurality of target persons to be photographed in the preview frame image, objects can be marked with rectangular bounding boxes, the score of each rectangular bounding box is predicted by logistic regression, and boxes whose score exceeds a score threshold are taken as the positions of target persons. A loss function is then selected, according to the kind of person information to be output, to compute the person information of the plurality of target persons to be photographed.
Specifically, for two-class problems such as the gender information of the target persons, the normalized exponential (softmax) function can be used to compute the person information of the plurality of target persons to be photographed; for multi-class problems such as the height information of the target persons, the binary cross entropy loss function can be used.
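As an illustration of the box filtering and loss selection just described, the sketch below uses PyTorch-style tensors; the tensor shapes, function names and threshold are assumptions, and the actual Darknet-53 heads are not reproduced here.

```python
import torch
import torch.nn.functional as F

def select_person_boxes(box_scores, boxes, score_threshold=0.5):
    """Keep only the rectangular bounding boxes whose logistic-regression score exceeds the threshold."""
    keep = torch.sigmoid(box_scores) > score_threshold   # box_scores: (N,), boxes: (N, 4)
    return boxes[keep]

def attribute_losses(gender_logits, gender_target, height_logits, height_target):
    """Two-class attributes (e.g. gender) scored with a softmax-based loss;
    multi-class attributes (e.g. height interval) scored with binary cross entropy,
    following the description above (shapes are assumptions)."""
    gender_loss = F.cross_entropy(gender_logits, gender_target)                    # softmax head
    height_loss = F.binary_cross_entropy_with_logits(height_logits, height_target)
    return gender_loss + height_loss
```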
It should be noted that, the above-mentioned training method for the preset personal information recognition model may refer to training methods for neural network models in other related technologies, which are not described herein again; in addition, in another embodiment of the present application, it is also possible to recognize person information of a plurality of target persons to be captured in the preview frame image by using a neural network model other than the Darknet model, for example, the convolutional neural network model CNN.
In some other embodiments of the present application, the acquiring of the person information of the plurality of target persons to be photographed may further include: the method includes the steps of obtaining a personal information list of a plurality of target persons to be photographed, and extracting personal information of the plurality of target persons to be photographed from the personal information list.
The personal information list is a list in which personal information of a plurality of target persons to be photographed is stored.
It should be noted that the personal information list may be obtained from a local storage space, or may be obtained from a cloud server, which is not limited in this application.
And 102, outputting position information corresponding to a plurality of target persons to be shot according to the person information of the plurality of target persons to be shot.
In the embodiment of the application, after the person information of a plurality of target persons to be photographed is acquired, the position information corresponding to the plurality of target persons to be photographed can be output according to the person information of the plurality of target persons to be photographed.
The output position information corresponding to the plurality of target persons to be photographed is a more reasonable arrangement derived from their person information. The position information may include a position map expressed graphically and position description information expressed as text. For example, the position description information may read "Zhang San stands in the middle of the third row, and Li Si stands first from the left in the third row, …".
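For illustration, such textual position descriptions could be produced from a row-indexed layout, as in the hypothetical sketch below (the layout format and wording are assumptions).

```python
def describe_positions(layout):
    """layout: list of rows, each a list of person names ordered from left to right (assumed format)."""
    parts = []
    for r, row in enumerate(layout, start=1):
        for c, name in enumerate(row, start=1):
            parts.append(f"{name} stands at position {c} from the left in row {r}")
    return "; ".join(parts)

print(describe_positions([["Li Si", "Zhang San", "Wang Wu"]]))
# Li Si stands at position 1 from the left in row 1; Zhang San stands at position 2 ...
```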
In the embodiments of the application, the person information of a plurality of target persons to be photographed is acquired, and the position information corresponding to those target persons is output according to that person information. Because the output position information is derived from the person information of each target person to be photographed, it can match the actual situation of the target persons and optimize the group-photo effect; for example, in practice, outputting position information based on the person information can avoid placing a short target person right next to a tall and broad one. In addition, when the group photo is taken, the photographer only needs to adjust the positions of the target persons once, according to the output position information, to obtain a reasonable and attractive arrangement; the positions do not need to be adjusted repeatedly, which improves group-photo shooting efficiency.
In order to obtain a reasonable and beautiful position for a group of people, as shown in fig. 3, in some embodiments of the present application, the outputting the position information corresponding to a plurality of target people to be photographed according to the person information of the plurality of target people to be photographed may specifically include: step 301 to step 302.
In step 301, a first similarity between the personal information of the target persons to be captured and the personal information of the first sample persons in the first sample image stored in advance is calculated.
The position information of the plurality of first sample persons in the pre-stored first sample image is position information that both matches the actual situation of each first sample person in that image and produces an attractive arrangement.
Specifically, the first similarity between the personal information of the target persons to be captured and the personal information of the first sample persons in the first sample image stored in advance may be calculated by the following formula:
S(G_i, G_j) = …  (the formula appears as an image in the original publication)
where G_i denotes the person information of the plurality of target persons to be photographed, G_j denotes the person information of the plurality of first sample persons in the pre-stored first sample image, S(G_i, G_j) denotes the first similarity between the two, m denotes the total number of target persons to be photographed, n denotes the total number of first sample persons in the first sample image, L_p denotes the person information of the p-th target person to be photographed, L_q denotes the person information of the q-th first sample person in the pre-stored first sample image, and s(L_p, L_q) denotes the third similarity between the person information of the p-th target person and the person information of the q-th first sample person.
The third similarity s(L_p, L_q) between the person information of the p-th target person to be photographed and the person information of the q-th first sample person in the pre-stored first sample image can be calculated by the following formula:
s(L_p, L_q) = …  (the formula appears as an image in the original publication)
where L_p^1 to L_p^e denote the e kinds of person information of one target person to be photographed, e being an integer greater than or equal to 1; L_q^1 to L_q^e denote the corresponding e kinds of person information of one first sample person in the pre-stored first sample image; k_1 to k_n denote the weight assigned to each kind of person information; and A_1 to A_n denote the maximum value, or the total number of value intervals, corresponding to each kind of information. When A_1 to A_n denote the total number of value intervals of each kind of information, L_p^1 to L_p^e indicate which value interval each of the e kinds of person information of the target person to be photographed falls into; correspondingly, L_q^1 to L_q^e indicate which value interval each of the corresponding e kinds of person information of the first sample person in the pre-stored first sample image falls into.
For example, when the height range from 100 cm to 200 cm is divided into one interval every 5 cm, the total number of value intervals corresponding to the height information is (200 - 100) / 5 = 20.
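A small helper like the one below (hypothetical) illustrates this quantization: it maps a height to its 5 cm value interval among the (200 - 100) / 5 = 20 intervals.

```python
def height_interval(height_cm, low=100, high=200, step=5):
    """Return (interval index, total number of intervals) for a height; here (200-100)/5 = 20 intervals."""
    total = (high - low) // step                    # 20 intervals
    clamped = min(max(height_cm, low), high - 1)    # keep the value inside the range
    index = int((clamped - low) // step) + 1        # 1-based interval index
    return index, total

print(height_interval(172))   # (15, 20): 172 cm falls into the 15th of the 20 intervals
```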
It should be noted that L_q^1 to L_q^e are the pre-stored information of the first sample person in the first sample image corresponding to L_p^1 to L_p^e, and may be information that plays an important role in calculating the third similarity between the person information of one target person to be photographed and the person information of one first sample person, such as sex information, body width information, or position grade information. If the sex, body width or position grade of a target person to be photographed differs from that of a first sample person in the pre-stored first sample image, the third similarity between the person information of that target person and that first sample person is 0.
In some embodiments of the application, the weights k_1 to k_n can be adjusted according to the actual situation. For example, when age information matters more than other information for the arrangement, such as in a family photo, the weight of the age information can be made larger than that of the other kinds of person information; likewise, when position grade information matters more, for example in a group photo of a work conference, the weight of the position grade information can be made larger than that of the other kinds of person information.
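Putting the pieces of step 301 together, one possible reading of the weighted similarity computation is sketched below: each of the e kinds of person information is compared by value interval, weighted by its k, and the pairwise third similarities are averaged into the first similarity. Since the exact formulas appear only as images in the original publication, this is an interpretation under stated assumptions, not the authoritative definition.

```python
def third_similarity(p_info, q_info, weights, intervals):
    """Weighted agreement between the e kinds of person information of one target person (p_info)
    and of one first sample person (q_info), given as interval indices (assumed representation).
    Per the description above, a mismatch in hard attributes such as sex or position grade yields 0."""
    for hard in ("sex", "position_grade"):
        if p_info.get(hard) is not None and p_info.get(hard) != q_info.get(hard):
            return 0.0
    score = 0.0
    for key, k in weights.items():
        a = intervals[key]                       # A: total number of value intervals for this attribute
        diff = abs(p_info[key] - q_info[key])    # difference of interval indices
        score += k * (1.0 - diff / a)            # assumed form of the per-attribute term
    return score

def first_similarity(targets, samples, weights, intervals):
    """Average pairwise third similarity between the m target persons and the n first sample persons."""
    m, n = len(targets), len(samples)
    return sum(third_similarity(p, q, weights, intervals)
               for p in targets for q in samples) / (m * n)
```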
Step 302, using the first sample image corresponding to the first similarity larger than the first similarity threshold as the target sample image, and outputting the position information of the plurality of target persons to be photographed according to the position information of each sample person in the target sample image.
In the embodiment of the present application, since the first sample image corresponding to the first similarity greater than the first similarity threshold is the first sample image having the greatest similarity with the personal information of the target person to be photographed, the first sample image corresponding to the first similarity greater than the first similarity threshold may be used as the target sample image.
In the embodiments of the application, each kind of person information of the plurality of target persons to be photographed is compared, item by item, with the corresponding kind of person information of the plurality of first sample persons in the pre-stored first sample images, and the first similarity is obtained by weighting. This effectively improves the accuracy of the similarity calculation, so that the position information of the first sample persons in the target sample image determined from the pre-stored first sample images is consistent with the plurality of target persons to be photographed.
It should be noted that, the above-mentioned calculating the first similarity between the personal information of the target persons to be shot and the personal information of the first sample persons in the first sample image stored in advance may also adopt other calculation methods of the similarity of the related images, and details are not described here.
In some other implementation manners of the present application, when the position information is a position map, as shown in fig. 4, after the outputting the position information corresponding to the plurality of target persons to be photographed, the method may further include: step 401 to step 404.
Step 401, displaying a position map corresponding to a plurality of target persons to be photographed, and receiving a selection operation or a deselection operation of the position map.
For example, after the position maps of the plurality of target persons to be photographed are obtained by the position information generating method shown in fig. 3, the output position map corresponding to each of the plurality of target persons to be photographed may be displayed.
In the embodiments of the application, after the position maps corresponding to the plurality of target persons to be photographed are displayed, a selection operation or a deselection operation on the position maps can be received. When the terminal receives a selection of a position map, the selected position map meets the user's requirements for the group-photo arrangement; when the terminal receives a deselection of the position maps, none of the displayed position maps meets the user's requirements, and a position map needs to be output again.
For example, the position map is output again in the manner of step 402 to step 404.
Step 402, if a deselection operation on the position map is received, grouping the plurality of target persons to be photographed according to their person information, and determining the target persons corresponding to each group and the number of persons in each group.
In the embodiment of the present application, in the process of grouping a plurality of target persons to be photographed according to the personal information of the plurality of target persons to be photographed, each target person in the same group has the same or similar personal information.
For example, if the person information of a plurality of target persons to be photographed includes sex information and height information, the sex information is divided into male and female, and the height information is divided into two height ranges of 165 cm or more and 165 cm or less, 4 groups can be obtained.
For example, as shown in fig. 5, each box in the preview frame image a represents a target person to be photographed, where the dotted boxes represent male target persons, the solid boxes represent female target persons, and the vertical length of a box represents the target person's height. The target person 51 can thus be assigned to the group of males no taller than 165 cm, the target persons 52, 53, 54 and 55 to the group of females no taller than 165 cm, the target persons 56 and 57 to the group of females taller than 165 cm, and the target persons 58, 59, 510 and 511 to the group of males taller than 165 cm.
Step 403, obtaining the total number of the multiple target persons to be photographed, and determining the total number of the rows of the positions of the multiple target persons to be photographed according to the total number of the persons.
The total number of position rows may take one or more candidate values, determined from the total number of target persons. For example, when the total number of persons is less than or equal to a first total threshold (e.g., 20), the total number of rows may be one or two; when the total number of persons is less than or equal to a second total threshold (e.g., 30), the total number of rows may be two or three.
Step 404, outputting a position map corresponding to a plurality of target persons to be photographed according to the person information of the target person corresponding to each group, the number of persons in each group and the total number of rows of positions.
Specifically, the number of position rows occupied by each group may first be determined from the person information of each group and the total number of rows; then the order of the target persons within each row is determined from the number of persons each group contributes to that row. For example, the target persons of the group with the fewest members in a row can stand in the middle of that row, with the target persons of the other groups arranged on both sides in ascending order of group size; alternatively, the target persons of the group with the most members in a row can stand in the middle of that row, with the target persons of the other groups arranged on both sides in descending order of group size.
For example, since the total number of target persons in the preview frame image a shown in fig. 5 is 11, the total number of position rows may be one or two; that is, when the number of rows of each group is determined from the person information of each group and the total number of rows, the target persons of the 4 groups may all stand in a single row, or they may stand in two rows. For instance, the two groups shorter than 165 cm stand in the first row and the two groups taller than 165 cm stand in the second row; then, within each row, the groups can be arranged on both sides of the smallest group in ascending order of group size, yielding the position maps b and c corresponding to the 11 target persons in preview frame image a; or, within each row, the largest group stands in the middle and the other groups are arranged on both sides in descending order of group size, yielding the position maps d and e corresponding to the 11 target persons in preview frame image a.
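A rough sketch of steps 402 to 404 (grouping, row assignment, and center-outward ordering within a row) is shown below. The grouping keys, the 165 cm split and the row-splitting rule are assumptions taken from the example above; it is illustrative only.

```python
def group_people(people):
    """Group target persons by sex and a 165 cm height split, as in the example above."""
    groups = {}
    for p in people:
        key = (p["sex"], p["height_cm"] >= 165)
        groups.setdefault(key, []).append(p)
    return list(groups.values())

def arrange_row(groups_in_row, largest_in_middle=True):
    """Put the largest (or smallest) group in the middle of the row and place the remaining
    groups alternately to its left and right, ordered by group size."""
    ordered = sorted(groups_in_row, key=len, reverse=largest_in_middle)
    row, place_left = list(ordered[0]), True
    for g in ordered[1:]:
        row = list(g) + row if place_left else row + list(g)
        place_left = not place_left
    return row

def position_map(people, rows=2):
    """Split the groups over the rows (shorter groups in the front row) and arrange each row."""
    groups = sorted(group_people(people), key=lambda g: max(p["height_cm"] for p in g))
    per_row = max(1, len(groups) // rows)
    return [arrange_row(groups[i:i + per_row]) for i in range(0, len(groups), per_row)]
```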
In the output position map corresponding to a plurality of target persons to be photographed, the positions of the target persons in the same group may be exchanged with each other.
For example, the positions of the target person 58 and the target person 59 in the position diagram b shown in fig. 5 may be reversed.
In the embodiments of the application, after a deselection operation on the position map is received, that is, when the user is not satisfied with the position maps displayed by the terminal, the plurality of target persons to be photographed can be grouped according to their person information, and position maps corresponding to them can be output according to the target persons of each group, the number of persons in each group and the number of position rows, so as to obtain position maps that better fit the actual situation of the target persons for the user to choose from.
In some other embodiments of the present application, as shown in fig. 6, after outputting the position information corresponding to the plurality of target persons to be photographed, the method may further include: step 601 to step 603.
Step 601, displaying position maps corresponding to a plurality of target people to be shot, and receiving selection operation or deselection operation of the position maps.
The specific implementation of step 601 may refer to the description of step 401, and is not described herein again.
Step 602, if a deselection operation for the location map is received, loading movable person label controls corresponding to a plurality of target persons to be photographed on the location map generation interface according to the person information of the plurality of target persons to be photographed.
The movable character label controls corresponding to the target characters to be shot are generated according to the character information of the target characters to be shot.
For example, the movable character label control is a movable label control labeled with a character name, or alternatively, the movable character label control is a movable label control with different widths and heights scaled according to a ratio, and the width of the movable label control represents the body width (for example, the width between shoulders or the width between arms) of the target character to be photographed, and the height of the movable label control represents the height of the target character to be photographed.
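One way to picture the scaled label controls just described: each control is a rectangle whose width and height are the target person's body width and height multiplied by a common display ratio. The sketch below is hypothetical; the field names and scaling factor are assumptions.

```python
def label_control_size(person, pixels_per_cm=1.5):
    """Compute the on-screen rectangle of a movable person label control,
    scaled from the person's body width (e.g. shoulder width) and height."""
    width_px = int(person["body_width_cm"] * pixels_per_cm)
    height_px = int(person["height_cm"] * pixels_per_cm)
    return width_px, height_px

print(label_control_size({"body_width_cm": 45, "height_cm": 170}))   # (67, 255)
```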
Step 603, receiving the moving operation of the movable character label control, and outputting the position maps corresponding to the plurality of target characters to be shot according to the position information of the movable character label control when receiving the position map generating instruction.
In the embodiment of the application, the movable character label controls corresponding to the plurality of target characters to be photographed can be moved to positions on the display interface of the terminal, which are considered reasonable by a user, according to the movement operation of the user on the terminal. After the user finishes the moving operation of all the movable character label controls, the position images corresponding to a plurality of target characters to be shot are output according to the position information of the movable character label controls through triggering the position information generation instruction, and the output position images of the target characters to be shot are position images which are set individually by the user.
In the embodiment of the application, after the operation of canceling the selection of the position map is received, that is, when the user is not satisfied with the position map displayed by the terminal, the movable character tag controls corresponding to the plurality of target characters to be photographed may be loaded on the position information generation interface according to the character information of the plurality of target characters to be photographed, then the movement operation of the movable character tag controls may be received, and when the position information generation instruction is received, the position maps corresponding to the plurality of target characters to be photographed may be output according to the position information of the movable character tag controls, so that the output position maps corresponding to the plurality of target characters to be photographed are the position maps customized by the user and meet the user requirements.
In some other embodiments of the present application, as shown in fig. 7, step 401 or step 601 may further include: step 701 to step 703.
Step 701, a scene image of a shooting scene is acquired.
The scene image of the shooting scene may be one or more scene images of the shooting scene. Moreover, the scene image of the shooting scene may be an image acquired by the camera from different shooting angles or different shooting positions.
For example, the scene images B and C shown in fig. 8 are scene images acquired by the camera from different shooting positions respectively.
Step 702, fusing the position maps corresponding to the multiple target characters to be shot with the scene images of the shooting scene to obtain fused position maps.
That is, the scene images of the plurality of shooting scenes may be respectively fused with the position maps corresponding to the plurality of target persons to be shot, so as to obtain a plurality of fused position maps.
Specifically, the fusing the position maps corresponding to the multiple target persons to be photographed with the scene image of the photographing scene to obtain fused position maps corresponding to the multiple target persons to be photographed may include: carrying out target identification on the scene image to obtain a target object in the scene image; and embedding the position maps corresponding to a plurality of target characters to be shot into the scene image according to the shape, the contour size and the position information of the target object to obtain a fused position map.
The target object in the scene image may include objects such as buildings, animals, and plants in the scene image.
Specifically, as shown in fig. 8, in the position map A corresponding to the target persons 81, 82, 83, 84 and 85 to be photographed, the target persons 81 to 85 are arranged in a single row from left to right. When position map A is fused with a scene image of the shooting scene to obtain a fused position map, the scene image B of the shooting scene can first be acquired and target recognition performed on it to obtain the target objects tree 86 and stone 87 in scene image B; position map A can then be embedded in front of the tree 86 according to the shape, contour size and position information of the tree 86 and the stone 87, giving fused position map D, or position map A can be split and embedded on the left and right sides of the tree 86, giving fused position map E. Similarly, a scene image C of the shooting scene can be acquired and target recognition performed on it to obtain the tree 86 and the stone 87 in scene image C; position map A can then be embedded in front of the stone 87 according to the shape, contour size and position information of the tree 86 and the stone 87, giving fused position map F, or split and embedded on the left and right sides of the stone 87, giving fused position map G.
In some other embodiments of the present application, the fusing the position maps corresponding to the multiple target persons to be captured with the scene image of the captured scene to obtain fused position maps corresponding to the multiple target persons to be captured may further include: the method comprises the steps of obtaining the lower edge of a scene image of a shooting scene, embedding position diagrams corresponding to a plurality of target characters to be shot above the lower edge of the scene image of the shooting scene, and obtaining fused position diagrams corresponding to the plurality of target characters to be shot.
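A minimal sketch of this simpler fusion variant (pasting the position map just above the lower edge of the scene image) is shown below, using the Pillow library; the margin, scaling and file handling are assumptions.

```python
from PIL import Image

def fuse_above_lower_edge(scene_path, position_map_path, margin=20):
    """Embed the position map above the lower edge of the scene image and return the fused image."""
    scene = Image.open(scene_path).convert("RGBA")
    pos_map = Image.open(position_map_path).convert("RGBA")
    # scale the position map down to the scene width if needed, keeping its aspect ratio
    scale = min(1.0, scene.width / pos_map.width)
    pos_map = pos_map.resize((int(pos_map.width * scale), int(pos_map.height * scale)))
    x = (scene.width - pos_map.width) // 2
    y = scene.height - pos_map.height - margin
    fused = scene.copy()
    fused.paste(pos_map, (x, y), pos_map)   # the position map's alpha channel is used as the mask
    return fused
```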
It should be noted that other image fusion methods are also applicable to the present application, and only the position maps corresponding to the multiple target persons to be photographed and the scene image of the photographing scene need to be fused to obtain the fused position maps corresponding to the multiple target persons to be photographed.
And 703, displaying the fused position diagram, and receiving the selection operation or the deselection operation of the fused position diagram.
In the embodiments of the application, the position maps corresponding to the plurality of target persons to be photographed are fused with scene images of the shooting scene in different fusion modes to obtain several different fused position maps, so that the user can select from them a position map in which the people harmonize both with each other and with the scene, further optimizing the group-photo effect.
After step 102, or after the position information of the target persons to be photographed has been obtained by the position information generation method shown in fig. 3 and the corresponding position maps have been generated from it, or after step 403, or after step 603, the position information corresponding to the plurality of target persons to be photographed may be sent to the target persons over the internet or a local area network, so that each target person to be photographed can adjust his or her position according to the received position information, further improving shooting efficiency.
In the embodiment of the application, after the position information generation instruction is received, the position map corresponding to the position information generation instruction may be stored as the first sample image, so that the position map generated by using the position information generation method shown in fig. 3 can meet personalized requirements of different users.
It should be noted that for simplicity of description, the aforementioned method embodiments are all presented as a series of combinations of acts, but those skilled in the art will appreciate that the present invention is not limited by the order of acts described, as some steps may occur in other orders in accordance with the present invention.
For example, in some embodiments of the present application, the step 602 of loading the movable person tag controls corresponding to the multiple target persons to be photographed in the position information generating interface according to the personal information of the multiple target persons to be photographed, and the step 603 may also be performed after the step 403.
That is, after the position maps corresponding to the plurality of target persons to be photographed are output according to the target persons of each group, the number of persons in each group and the number of position rows, the position maps are displayed and a selection operation or a deselection operation on them is received; if a deselection operation on the position map is received, movable person label controls corresponding to the plurality of target persons to be photographed are loaded on the position information generation interface according to their person information; a move operation on the movable person label controls is then received, and when a position information generation instruction is received, the position maps corresponding to the plurality of target persons to be photographed are output according to the position information of the movable person label controls, thereby realizing personalized customization of the position maps.
Fig. 9 shows a schematic structural diagram of a position information generating apparatus 900 provided in an embodiment of the present application, and includes an obtaining unit 901 and an output unit 902.
An acquisition unit 901 configured to acquire person information of a plurality of target persons to be photographed, wherein the person information comprises one or more of name, sex, height, body width, age, position grade and clothing color information;
an output unit 902, configured to output, according to the person information of the multiple target persons to be photographed, position information corresponding to the multiple target persons to be photographed.
In some embodiments of the present application, the output unit 902 is further configured to calculate a first similarity between the personal information of the target persons to be captured and the personal information of the first sample persons in the first sample image stored in advance; and taking a first sample image corresponding to a first similarity larger than a first similarity threshold value as a target sample image, and outputting the position information of the plurality of target persons to be shot according to the position information of each sample person in the target sample image.
In some embodiments of the present application, the output unit 902 is further configured to display the position maps corresponding to the plurality of target persons to be photographed and receive a selection operation or a deselection operation on the position maps; if a deselection operation on the position map is received, group the plurality of target persons to be photographed according to their person information and determine the target persons corresponding to each group and the number of persons in each group; acquire the total number of target persons to be photographed and determine the total number of position rows of the plurality of target persons according to it; and output the position maps corresponding to the plurality of target persons to be photographed according to the target persons of each group, the number of persons in each group and the total number of position rows.
In some embodiments of the present application, the output unit 902 is further configured to display the position maps corresponding to the plurality of target persons to be photographed and receive a selection operation or a deselection operation on the position maps; if a deselection operation on the position map is received, load movable person label controls corresponding to the plurality of target persons to be photographed on the position information generation interface according to their person information; and receive a move operation on the movable person label controls and, when a position information generation instruction is received, output the position maps corresponding to the plurality of target persons to be photographed according to the position information of the movable person label controls.
In some embodiments of the present application, the acquisition unit 901 is further configured to acquire a preview frame image captured by a camera, and identify person information of a plurality of target persons to be photographed in the preview frame image.
In some embodiments of the present application, the acquisition unit 901 is further configured to acquire a person information list of a plurality of target persons to be photographed, and extract the person information of the plurality of target persons to be photographed from the person information list.
In some embodiments of the present application, the output unit 902 is further configured to input the preview frame image into a character information recognition model, and output character information of a plurality of target characters to be captured in the preview frame image by the character information recognition model.
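Only the shape of these two acquisition paths is sketched below: either a preview frame is run through a (stubbed) recognition model, or the person information is read from a prepared list, and both paths yield the same kind of per-person attribute dictionaries. The stub model, the JSON list format and the attribute values are assumptions of this sketch; a real preset character information recognition model would be a trained detector and attribute estimator.

```python
# Shape-only sketch of acquiring person information from a preview frame or
# from a person information list.
import json

def person_info_recognition_model(preview_frame: bytes):
    """Stub standing in for the preset character information recognition model;
    a real model would detect each person in the frame and estimate attributes."""
    return [
        {"name": "unknown-1", "sex": "female", "height": 165, "age": 30, "clothes_color": "red"},
        {"name": "unknown-2", "sex": "male", "height": 180, "age": 35, "clothes_color": "blue"},
    ]

def acquire_person_info(preview_frame: bytes = None, info_list_json: str = None):
    if preview_frame is not None:
        return person_info_recognition_model(preview_frame)  # recognition path
    return json.loads(info_list_json)                         # person information list path

print(acquire_person_info(preview_frame=b"raw-frame-bytes"))
print(acquire_person_info(info_list_json='[{"name": "Alice", "height": 170}]'))
```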
In some embodiments of the present application, the output unit 902 is further configured to obtain a scene image of a shooting scene; fuse the position maps corresponding to the plurality of target persons to be photographed with the scene image of the shooting scene to obtain fused position maps; and display the fused position maps, and receive a selection operation or a deselection operation on the fused position maps.
In some embodiments of the application, the position information generating apparatus may further include a training unit, configured to train a to-be-trained personal information recognition model to obtain the preset personal information recognition model.
Specifically, the training unit is configured to obtain a plurality of second sample pictures, each of which is labelled in advance with the personal information of a plurality of second sample persons; input a target second sample picture among the plurality of second sample pictures into the to-be-trained personal information recognition model, and output, by the to-be-trained personal information recognition model, the personal information of the plurality of second sample persons in the target second sample picture; and calculate a second similarity between the personal information output by the to-be-trained personal information recognition model for the target second sample picture and the pre-labelled personal information of the target second sample picture. If the second similarity is smaller than a second similarity threshold, the parameters of the to-be-trained personal information recognition model are adjusted and the model is trained again by reusing the target second sample picture, until the second similarity is greater than or equal to the second similarity threshold, or until the number of training times on that target second sample picture is greater than or equal to a first times threshold; training then continues with the next target second sample picture among the plurality of second sample pictures. The preset character information recognition model is obtained when the total number of training times of the to-be-trained personal information recognition model is greater than or equal to a second times threshold, or when the change rate of the second similarity is smaller than a change rate threshold.
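The control flow described above might be sketched as follows; `train_step` and `evaluate_similarity` are placeholders for a real parameter update and a real comparison against the pre-labelled personal information, and every threshold value is an illustrative assumption rather than a value from the embodiment.

```python
# Control-flow sketch of the described training procedure.
def train_recognition_model(samples, train_step, evaluate_similarity,
                            sim_threshold=0.9, per_sample_limit=50,
                            total_limit=10000, change_rate_threshold=1e-4):
    total_steps = 0
    prev_sim = None
    for sample in samples:
        per_sample_steps = 0
        while True:
            train_step(sample)                 # adjust model parameters on this sample
            total_steps += 1
            per_sample_steps += 1
            sim = evaluate_similarity(sample)  # second similarity vs. the labels

            # Overall stopping conditions: total training count or similarity plateau.
            if total_steps >= total_limit:
                return "stopped: total training count reached"
            if prev_sim is not None and abs(sim - prev_sim) < change_rate_threshold:
                return "stopped: similarity change rate below threshold"
            prev_sim = sim

            # Per-sample conditions: good enough, or this sample has been reused enough.
            if sim >= sim_threshold or per_sample_steps >= per_sample_limit:
                break                          # move on to the next sample picture
    return "stopped: all sample pictures consumed"

# Tiny demonstration with stand-in callables: similarity improves by a fixed
# step each time a sample is revisited.
state = {"sim": 0.0}
print(train_recognition_model(
    samples=["img1", "img2", "img3"],
    train_step=lambda s: state.update(sim=min(1.0, state["sim"] + 0.2)),
    evaluate_similarity=lambda s: state["sim"],
))
```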
It should be noted that, for convenience and simplicity of description, the specific working process of the position information generating apparatus 900 described above may refer to the corresponding process of the method described in fig. 1 to fig. 8, and is not described herein again.
As shown in fig. 10, the present application provides a terminal for implementing the location information generating method, where the terminal may include: a processor 11, a memory 12, one or more input devices 13 (only one shown in fig. 10), and one or more output devices 14 (only one shown in fig. 10). The processor 11, memory 12, input device 13 and output device 14 are connected by a bus 15.
It should be understood that, in the embodiments of the present application, the processor 11 may be a Central Processing Unit (CPU), and may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 13 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 14 may include a display, a speaker, etc.
The memory 12 stores a computer program that can be executed by the processor 11, and the computer program is, for example, a program of a position information generation method. The processor 11 implements steps in the embodiment of the position information generating method, such as steps 101 to 102 shown in fig. 1, when executing the computer program. Alternatively, the processor 11 may implement the functions of the units in the device embodiment when executing the computer program, for example, the functions of the units 901 to 902 shown in fig. 9.
The computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 11 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used for describing the execution process of the computer program in the terminal that generates the position information. For example, the computer program may be divided into an acquisition unit and an output unit, and the specific functions of each unit are as follows:
an acquisition unit configured to acquire person information of a plurality of target persons to be photographed; wherein the character information comprises one or more of name, sex, height, body width, age, position grade and clothes color information;
and the output unit is used for outputting the position information corresponding to the target people to be shot according to the person information of the target people to be shot.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiment of the present application provides a computer program product which, when run on a terminal device, enables the terminal device to implement the steps of the position information generating method in the foregoing embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal are merely illustrative, and for example, the division of the above-described modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of the embodiments of the methods described above. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunication signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. A method for generating location information, comprising:
acquiring character information of a plurality of target characters to be shot; wherein the character information comprises one or more of name, sex, height, body width, age, position grade and clothes color information;
outputting position information corresponding to the target people to be shot according to the person information of the target people to be shot;
the position information includes a position map, and after the outputting of the position information corresponding to the plurality of target persons to be photographed, the method further comprises:
displaying position maps corresponding to the plurality of target persons to be shot, and receiving a selection operation or a deselection operation on the position maps;
if a deselection operation on the position maps is received, grouping the plurality of target persons to be shot according to the person information of the plurality of target persons to be shot, and determining the target persons corresponding to each group and the number of persons in each group;
acquiring the total number of the plurality of target persons to be shot, and determining the total number of position rows of the plurality of target persons to be shot according to the total number; and
outputting the position maps corresponding to the plurality of target persons to be shot according to the target persons corresponding to each group, the number of persons in each group and the total number of position rows.
2. The position information generating method according to claim 1, wherein said outputting position information corresponding to the plurality of target persons to be photographed based on the person information of the plurality of target persons to be photographed includes:
calculating first similarity between the person information of the target persons to be shot and the person information of the first sample persons in the prestored first sample image;
and taking a first sample image corresponding to a first similarity larger than a first similarity threshold value as a target sample image, and outputting the position information of the plurality of target persons to be shot according to the position information of each sample person in the target sample image.
3. The position information generating method according to claim 1, wherein the displaying of the position maps corresponding to the plurality of target persons to be photographed and the receiving of a selection operation or a deselection operation on the position maps comprises:
acquiring a scene image of a shooting scene;
fusing the position maps corresponding to the target characters to be shot with the scene images of the shooting scene to obtain fused position maps;
and displaying the fused position map, and receiving the selection operation or the deselection operation of the fused position map.
4. The position information generating method according to claim 1, wherein said acquiring the personal information of the plurality of target persons to be photographed includes:
acquiring a preview frame image acquired by a camera, and identifying character information of a plurality of target characters to be shot in the preview frame image; or,
the method comprises the steps of obtaining a person information list of a plurality of target persons to be shot, and extracting person information of the plurality of target persons to be shot from the person information list.
5. The positional information generation method according to claim 4, wherein the identifying of the personal information of the plurality of target persons to be captured in the preview frame image comprises:
and inputting the preview frame image into a preset character information recognition model, and outputting character information of a plurality of target characters to be shot in the preview frame image by the preset character information recognition model.
6. The position information generating method according to claim 5, wherein before the inputting of the preview frame image into the preset character information recognition model, the method comprises:
training a character information recognition model to be trained to obtain the preset character information recognition model;
the training of the figure information recognition model to be trained to obtain the preset figure information recognition model comprises the following steps:
acquiring a plurality of second sample pictures; the second sample pictures are provided with the personal information of a plurality of second sample persons marked in advance;
inputting a target second sample picture in the second sample pictures into a to-be-trained personal information recognition model, and outputting personal information of a plurality of second sample persons in the target second sample picture by the to-be-trained personal information recognition model;
calculating a second similarity between the personal information of the plurality of second sample persons in the target second sample picture output by the to-be-trained personal information recognition model and the pre-labelled personal information of the plurality of second sample persons in the target second sample picture; if the second similarity is smaller than a second similarity threshold, adjusting parameters of the to-be-trained personal information recognition model and training the to-be-trained personal information recognition model by reusing the target second sample picture, until the second similarity is greater than or equal to the second similarity threshold, or until the number of training times on the target second sample picture is greater than or equal to a first times threshold, and then training the to-be-trained personal information recognition model by using a next target second sample picture among the plurality of second sample pictures; and obtaining the preset character information recognition model when the total number of training times of the to-be-trained personal information recognition model is greater than or equal to a second times threshold, or when the change rate of the second similarity is smaller than a change rate threshold.
7. A position information generating apparatus, characterized by comprising:
an acquisition unit configured to acquire person information of a plurality of target persons to be photographed; wherein the character information comprises one or more of name, sex, height, body width, age, position grade and clothes color information;
the output unit is used for outputting position information corresponding to the target people to be shot according to the person information of the target people to be shot;
the position information includes a position map, and after the outputting of the position information corresponding to the plurality of target persons to be photographed, the outputting unit is further configured to:
display position maps corresponding to the plurality of target persons to be shot, and receive a selection operation or a deselection operation on the position maps;
if a deselection operation on the position maps is received, group the plurality of target persons to be shot according to the person information of the plurality of target persons to be shot, and determine the target persons corresponding to each group and the number of persons in each group;
acquire the total number of the plurality of target persons to be shot, and determine the total number of position rows of the plurality of target persons to be shot according to the total number; and
output the position maps corresponding to the plurality of target persons to be shot according to the target persons corresponding to each group, the number of persons in each group and the total number of position rows.
8. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201910975466.6A 2019-10-12 2019-10-12 Position information generation method, device, terminal and computer readable storage medium Active CN110620877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910975466.6A CN110620877B (en) 2019-10-12 2019-10-12 Position information generation method, device, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910975466.6A CN110620877B (en) 2019-10-12 2019-10-12 Position information generation method, device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110620877A CN110620877A (en) 2019-12-27
CN110620877B true CN110620877B (en) 2021-03-26

Family

ID=68925778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910975466.6A Active CN110620877B (en) 2019-10-12 2019-10-12 Position information generation method, device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110620877B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473239B (en) * 2020-07-15 2023-10-13 青岛海信电子产业控股股份有限公司 Intelligent terminal, server and image processing method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105592261A (en) * 2014-11-04 2016-05-18 深圳富泰宏精密工业有限公司 Auxiliary shooting system and method
CN107509032A (en) * 2017-09-08 2017-12-22 维沃移动通信有限公司 One kind is taken pictures reminding method and mobile terminal
CN108650452A (en) * 2018-04-17 2018-10-12 广东南海鹰视通达科技有限公司 Face photographic method and system for intelligent wearable electronic
CN109068055A (en) * 2018-08-10 2018-12-21 维沃移动通信有限公司 A kind of patterning process, terminal and storage medium
CN109547694A (en) * 2018-11-29 2019-03-29 维沃移动通信有限公司 A kind of image display method and terminal device
CN109587394A (en) * 2018-10-23 2019-04-05 广东智媒云图科技股份有限公司 A kind of intelligence patterning process, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9385324B2 (en) * 2012-05-07 2016-07-05 Samsung Electronics Co., Ltd. Electronic system with augmented reality mechanism and method of operation thereof
CN102708575A (en) * 2012-05-17 2012-10-03 彭强 Daily makeup design method and system based on face feature region recognition
US10701274B2 (en) * 2014-12-24 2020-06-30 Canon Kabushiki Kaisha Controlling zoom magnification based on image size


Also Published As

Publication number Publication date
CN110620877A (en) 2019-12-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant