CN110796094A - Control method and device based on image recognition, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110796094A
CN110796094A
Authority
CN
China
Prior art keywords
quality
identified
frame
frame sequence
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911045426.8A
Other languages
Chinese (zh)
Inventor
池剑文 (Chi Jianwen)
王洁 (Wang Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN201911045426.8A priority Critical patent/CN110796094A/en
Publication of CN110796094A publication Critical patent/CN110796094A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V 10/95 Hardware or software architectures specially adapted for image or video understanding, structured as a network, e.g. client-server architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/00174 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C 9/00563 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys, using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns

Abstract

The disclosure relates to a control method and device based on image recognition, an electronic device and a storage medium. The method includes: acquiring a frame sequence to be identified when a control request is received; performing image matching between the frame sequence to be identified and an image database to obtain a matching result; selecting a target object from a candidate object set according to the matching result; and controlling the target object to perform a preset operation. Through this process, the operation to be performed can be determined automatically, for one and the same control request, from the matching result between the acquired frame sequence and the image database, so that the operation corresponding to the matching result is performed directly without an explicit instruction from the user. This skips the user's selection step, makes operation more convenient for the user, reduces the likelihood of error, and improves the accuracy of the method as a whole.

Description

Control method and device based on image recognition, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a control method and apparatus, an electronic device, and a storage medium based on image recognition.
Background
Image recognition is often applied to control of devices such as cabinet compartments, for example, by recognizing images and controlling the opening of cabinet compartments.
In practice, however, a compartment may need to be opened in different situations: a new compartment may be opened to deposit an item, or a previously used compartment may be opened to retrieve one. To distinguish these situations, the user may be required to enter or select among different options so that different operations are performed. This increases the complexity of the user's operation and makes erroneous operation more likely.
Disclosure of Invention
The present disclosure provides a control technical scheme based on image recognition.
According to an aspect of the present disclosure, there is provided an image recognition-based control method, including:
under the condition of receiving a control request, acquiring a frame sequence to be identified;
performing image matching with an image database according to the frame sequence to be recognized to obtain a matching result;
selecting a target object from a candidate object set according to the matching result;
and controlling the target object to execute preset operation.
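The four claimed steps can be sketched as a small orchestration function; every name below is an illustrative assumption, not part of the patent's actual implementation:

```python
# Hypothetical orchestration of steps S11-S14; all names are illustrative.

def handle_control_request(acquire_frames, match_with_db, select_target, execute):
    frames = acquire_frames()        # S11: acquire the frame sequence to be identified
    result = match_with_db(frames)   # S12: image matching against the image database
    target = select_target(result)   # S13: select the target object by matching result
    execute(target, result)          # S14: control the target to perform the operation
    return target
```

The callables stand in for the camera driver, the matching backend, the compartment selector and the lock controller described in the sections that follow.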
In a possible implementation manner, before performing image matching with an image database according to the frame sequence to be recognized to obtain a matching result, the method further includes:
acquiring a preset quality parameter;
judging whether the quality of the frame sequence to be identified meets the quality requirement or not according to the preset quality parameter to obtain a judgment result;
if the judgment result is yes, performing image matching according to the frame sequence to be identified to obtain a matching result;
and returning to the step of collecting the frame sequence to be identified under the condition that the judgment result is negative.
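A minimal sketch of this screening loop, assuming hypothetical `capture` and `quality_ok` callables and a retry cap that the text does not specify:

```python
def acquire_until_quality_ok(capture, quality_ok, max_attempts=5):
    """Re-acquire the frame sequence until it passes the quality check.

    `capture` stands in for the camera acquisition step and `quality_ok`
    for the parameter-based quality judgment; the retry cap is an assumption.
    """
    for _ in range(max_attempts):
        frames = capture()
        if quality_ok(frames):
            return frames  # judgment result "yes": proceed to image matching
    return None            # gave up after repeated low-quality captures
```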
In a possible implementation manner, the determining, according to the preset quality parameter, whether the quality of the frame sequence to be identified meets a quality requirement, to obtain a determination result, includes:
selecting at least one frame to be identified from the sequence of frames to be identified according to the preset quality parameter;
and judging whether the quality of the frame sequence to be identified meets the quality requirement or not according to each frame to be identified to obtain a judgment result.
In a possible implementation manner, the determining, according to each frame to be identified, whether the quality of the sequence of frames to be identified meets a quality requirement, to obtain a determination result, includes:
sequentially calculating a first quality score of each frame to be identified according to the sequence of the frame sequence to be identified;
determining that the judgment result is negative under the condition that any first quality score is less than a first quality threshold;
and determining that the judgment result is positive under the condition that the first quality score of every frame to be identified is not less than the first quality threshold.
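The sequential, fail-fast variant described above might look like the following sketch; the per-frame scoring function is left abstract:

```python
def sequential_quality_check(frames, score, threshold):
    """Score frames in sequence order; fail fast on the first low score.

    `score` is a placeholder for the first-quality-score computation.
    """
    for frame in frames:
        if score(frame) < threshold:
            return False  # judgment result: no
    return True           # every frame reached the threshold: yes
```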
In a possible implementation manner, the determining, according to each frame to be identified, whether the quality of the sequence of frames to be identified meets a quality requirement, to obtain a determination result, includes:
respectively calculating a second quality score of each frame to be identified;
obtaining a third quality score of the frame sequence to be identified according to each second quality score;
determining that the judgment result is negative under the condition that the third quality score is less than a second quality threshold;
and determining that the judgment result is positive under the condition that the third quality score is not less than the second quality threshold.
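The sequence-level variant can be sketched as follows; the text leaves the combination of second quality scores unspecified, so averaging them is purely an assumption:

```python
def aggregate_quality_check(frames, score, threshold):
    """Combine per-frame (second) quality scores into a sequence-level
    (third) score; aggregation by arithmetic mean is an assumption."""
    second_scores = [score(f) for f in frames]
    third_score = sum(second_scores) / len(second_scores)
    return third_score >= threshold
```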
In a possible implementation manner, the preset quality parameter includes: one or more of an orientation parameter, an angle parameter, an occlusion rate parameter, a sharpness parameter, a liveness confidence parameter, a size parameter, and a similarity parameter.
In a possible implementation manner, the preset quality parameter and the adjustment interface of the preset quality parameter are displayed through a preset parameter display interface.
In a possible implementation manner, the performing image matching with an image database according to the sequence of frames to be recognized to obtain a matching result includes:
selecting at least one frame to be matched from the frame sequence to be identified;
respectively carrying out similarity calculation on the frame to be matched and each image contained in the image database to obtain a similarity calculation result;
and determining the matching result as a matching pass if the similarity calculation result is greater than a similarity threshold.
In a possible implementation manner, the performing image matching with an image database according to the sequence of frames to be recognized to obtain a matching result further includes:
under the condition that the similarity calculation result is not greater than the similarity threshold, acquiring the current number of similarity calculations;
determining the matching result as a matching failure under the condition that the number of similarity calculations is greater than a count threshold;
and returning to the step of acquiring the frame sequence to be identified under the condition that the number of similarity calculations is not greater than the count threshold.
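The pass/fail/retry logic of the two implementations above can be sketched together; the similarity measure, thresholds and return labels are all illustrative assumptions:

```python
def match_frame(frame, database, similarity, sim_threshold, attempts, max_attempts):
    """Compare one frame to be matched against every image in the database.

    Returns "pass" (best similarity exceeds the threshold), "fail" (the
    retry budget is exhausted) or "retry" (re-acquire a new frame sequence).
    """
    best = max((similarity(frame, img) for img in database), default=0.0)
    if best > sim_threshold:
        return "pass"
    return "fail" if attempts > max_attempts else "retry"
```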
In a possible implementation manner, the selecting a target object from the candidate object set according to the matching result includes:
under the condition that the matching result is a matching pass, taking, from the candidate objects in a non-idle state in the candidate object set, the object corresponding to the frame sequence to be recognized as the target object;
and under the condition that the matching result is matching failure, randomly selecting a candidate object in an idle state from the candidate object set as the target object.
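A sketch of this selection rule, assuming a hypothetical dict layout with `state` and `owner` fields for each compartment:

```python
import random

def select_target(match_result, candidates, user_id=None):
    """Select the target compartment from the candidate set.

    Matching pass: reuse the non-idle compartment already associated with
    this user. Matching failure: pick an idle compartment at random.
    The dict layout and the "owner" field are illustrative assumptions.
    """
    if match_result == "pass":
        return next(c for c in candidates
                    if c["state"] != "idle" and c.get("owner") == user_id)
    idle = [c for c in candidates if c["state"] == "idle"]
    return random.choice(idle)
```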
In one possible implementation, the control request includes an open request.
In a possible implementation manner, the controlling the target object to perform a preset operation includes:
controlling the target object to execute an opening operation under the condition that the matching result is that the matching is passed;
and under the condition that the matching result is that the matching fails, controlling the target object to execute an opening operation, and updating the image database according to the frame sequence to be recognized.
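The two branches above can be sketched as follows, with a callback standing in for the physical lock controller; the enrolment-on-failure behaviour mirrors the database update described in the text:

```python
def execute_open(target, match_result, frames, image_db, open_fn):
    """Open the target in either case; on a failed match, additionally
    enrol the new frame sequence into the image database so the user
    can be recognised when retrieving the item later."""
    open_fn(target)
    if match_result == "fail":
        image_db.append(frames)  # update the database with the new frames
```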
In one possible implementation, the method further includes:
and changing the state of the target object according to the preset operation.
According to an aspect of the present disclosure, there is provided an image recognition-based control apparatus including:
the frame sequence acquisition module is used for acquiring a frame sequence to be identified under the condition of receiving the control request;
the matching module is used for carrying out image matching with an image database according to the frame sequence to be recognized to obtain a matching result;
the selecting module is used for selecting a target object from the candidate object set according to the matching result;
and the control module is used for controlling the target object to execute preset operation.
In a possible implementation manner, the apparatus further includes a quality determination module arranged before the matching module, where the quality determination module is configured to:
acquiring a preset quality parameter;
judging whether the quality of the frame sequence to be identified meets the quality requirement or not according to the preset quality parameter to obtain a judgment result;
if the judgment result is yes, performing image matching according to the frame sequence to be identified to obtain a matching result;
and returning to the step of collecting the frame sequence to be identified under the condition that the judgment result is negative.
In a possible implementation manner, the quality determination module is further configured to:
selecting at least one frame to be identified from the sequence of frames to be identified according to the preset quality parameter;
and judging whether the quality of the frame sequence to be identified meets the quality requirement or not according to each frame to be identified to obtain a judgment result.
In a possible implementation manner, the quality determination module is further configured to:
sequentially calculating a first quality score of each frame to be identified according to the sequence of the frame sequence to be identified;
determining that the judgment result is negative under the condition that any first quality score is less than a first quality threshold;
and determining that the judgment result is positive under the condition that the first quality score of every frame to be identified is not less than the first quality threshold.
In a possible implementation manner, the quality determination module is further configured to:
respectively calculating a second quality score of each frame to be identified;
obtaining a third quality score of the frame sequence to be identified according to each second quality score;
determining that the judgment result is negative under the condition that the third quality score is less than a second quality threshold;
and determining that the judgment result is positive under the condition that the third quality score is not less than the second quality threshold.
In a possible implementation manner, the preset quality parameter includes: one or more of an orientation parameter, an angle parameter, an occlusion rate parameter, a sharpness parameter, a liveness confidence parameter, a size parameter, and a similarity parameter.
In a possible implementation manner, the preset quality parameter and the adjustment interface of the preset quality parameter are displayed through a preset parameter display interface.
In one possible implementation, the matching module is configured to:
selecting at least one frame to be matched from the frame sequence to be identified;
respectively carrying out similarity calculation on the frame to be matched and each image contained in the image database to obtain a similarity calculation result;
and determining the matching result as a matching pass if the similarity calculation result is greater than a similarity threshold.
In one possible implementation, the matching module is further configured to:
under the condition that the similarity calculation result is not greater than the similarity threshold, acquiring the current number of similarity calculations;
determining the matching result as a matching failure under the condition that the number of similarity calculations is greater than a count threshold;
and returning to the step of acquiring the frame sequence to be identified under the condition that the number of similarity calculations is not greater than the count threshold.
In one possible implementation, the selecting module is configured to:
under the condition that the matching result is a matching pass, taking, from the candidate objects in a non-idle state in the candidate object set, the object corresponding to the frame sequence to be recognized as the target object;
and under the condition that the matching result is matching failure, randomly selecting a candidate object in an idle state from the candidate object set as the target object.
In one possible implementation, the control request includes an open request.
In one possible implementation, the control module is configured to:
controlling the target object to execute an opening operation under the condition that the matching result is that the matching is passed;
and under the condition that the matching result is that the matching fails, controlling the target object to execute an opening operation, and updating the image database according to the frame sequence to be recognized.
In a possible implementation manner, the apparatus further includes a state changing unit, and the state changing unit is configured to:
and changing the state of the target object according to the preset operation.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image recognition-based control method described above.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described image recognition-based control method.
In the embodiments of the present disclosure, a frame sequence to be recognized is acquired when a control request is received, image matching is performed between the frame sequence and an image database to obtain a matching result, a target object is selected from a candidate object set according to the matching result, and the target object is then controlled to perform a preset operation according to the control request and the matching result. Through this process, the operation to be performed can be determined automatically, for one and the same control request, from the matching result between the acquired frame sequence and the image database, so that the operation corresponding to the matching result is performed directly without an explicit instruction from the user. This skips the user's selection step, makes operation more convenient for the user, reduces the likelihood of error, and improves the accuracy of the method as a whole.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a flowchart of a control method based on image recognition according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a preset parameter display interface according to an embodiment of the present disclosure.
Fig. 3 illustrates a block diagram of a control device based on image recognition according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of an application example according to the present disclosure.
Fig. 5 shows a schematic diagram of an application example according to the present disclosure.
Fig. 6 shows a schematic diagram of an application example according to the present disclosure.
Fig. 7 shows a schematic diagram of an application example according to the present disclosure.
Fig. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 9 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B and C" may mean including any one or more elements selected from the set consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a control method based on image recognition according to an embodiment of the present disclosure, which may be applied to a control device based on image recognition, which may be a terminal device, a server, or other processing device. The terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In one example, the control method based on image recognition provided in the embodiments of the present disclosure may be applied to control a storage device, where the storage device may be a device such as a storage cabinet, an express delivery cabinet, or the like, which is composed of a plurality of independent storage spaces.
In some possible implementations, the image recognition-based control method may also be implemented by a processor calling computer-readable instructions stored in a memory.
As shown in fig. 1, the control method based on image recognition may include:
in step S11, when a control request is received, a frame sequence to be recognized is acquired.
In the above-described embodiments, the specific request content of the control request may be flexibly determined according to the specific implementation form of the target object to be controlled. The above disclosure has proposed that the control method may be applied to a scenario of controlling a cabinet grid such as a storage cabinet or an express delivery cabinet, and therefore, in a possible implementation manner, the control request may include an opening request.
By having the control request include an opening request, the two operation requests of depositing into and retrieving from storage devices such as express delivery cabinets or storage cabinets can be merged into a single opening request, which reduces the complexity of the user's operation and the likelihood of misoperation.
In the above disclosed embodiment, the frame sequence to be identified may be obtained through an image acquisition device such as a camera; the specific acquisition mode and the placement of the image acquisition device are not limited in the embodiments of the present disclosure and may be determined flexibly according to the actual situation. For example, the frame sequence to be identified may be obtained by a camera fixed above the center of the express delivery cabinet, or by a camera arranged around the cabinet's human-computer interaction interface.
In one possible implementation, the frame sequence to be recognized may include a sequence of facial image frames to be recognized, that is, the frames included in the acquired sequence of frames to be recognized may be facial image frames. The number of frames included in the frame sequence to be identified is not limited in the embodiments of the present disclosure. In one possible implementation, the number of frames acquired at each acquisition may be indirectly set by setting the time at which the frames are acquired.
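Under a fixed frame rate, setting the frame count indirectly via the capture time reduces to a simple product; the frame rate value used here is an assumption, not stated in the text:

```python
def frames_for_duration(fps, capture_seconds):
    """Number of frames acquired when the capture time, rather than the
    frame count, is configured (fps is an assumed camera property)."""
    return int(fps * capture_seconds)
```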
And step S12, performing image matching with an image database according to the frame sequence to be recognized to obtain a matching result.
The image database may store a plurality of images; the specific number of stored images is not limited in the embodiments of the present disclosure. How the images in the image database are acquired may be determined flexibly according to the actual situation. In one possible implementation, for a scenario of controlling an express delivery cabinet, the data stored in the image database may be images of users collected when they perform storage operations. In another possible implementation, the stored data may be images of the persons to whom the stored articles belong: when an article is deposited in the cabinet, for example by a courier, an image bound to the article may be entered into the database together with it, or the courier may enter information bound to the article, from which the database obtains the corresponding person's image by querying a cloud server or the like.
In the embodiment of the present disclosure, the matching process and manner can be flexibly determined according to the actual situation, and the specific process can refer to each subsequent disclosed embodiment, and is not expanded here.
Step S13, selecting a target object from the candidate object set according to the matching result.
The candidate set may include one or more candidates, and the number of specifically included candidates is not limited in the embodiment of the present disclosure. The candidate object may be a control object of the control method, and it has been proposed in the foregoing disclosure that the method proposed in the embodiments of the present disclosure may be applied to a scenario in which a cabinet lattice of a storage cabinet or an express delivery cabinet is controlled, and therefore, in a possible implementation manner, the candidate object may be each cabinet lattice of the controlled express delivery cabinet, and the candidate object set may be an express delivery cabinet composed of the cabinet lattices.
Specifically, how to select the target object according to the matching result, the selection mode may be adapted to the matching result, and the specific process may refer to each of the following disclosed embodiments, and is not expanded here.
In step S14, the control target object performs a preset operation.
The specific content of the preset operation performed by the controlled target object can be determined flexibly according to the control request, and is not limited to the following disclosed embodiments. In a possible implementation manner, since the embodiments of the present disclosure may be applied to controlling cabinets such as express delivery cabinets, and the control request may include an opening request, the preset operation performed by the target object may include an opening operation. Which preset operation the target object is controlled to perform under which conditions is described in the disclosed embodiments below, and is not expanded here.
In one possible implementation, when data matching is performed between the frame sequence to be recognized and the images in the image database, a frame with the best quality can be selected from the frame sequence for image matching. However, in an actual scene, due to the complexity and variability of camera and environmental conditions, the quality of the selected frame may not meet the matching or recognition requirements, leading to problems such as a high misrecognition rate or instability.
In order to reduce the occurrence of this situation, the control method based on image recognition proposed by the embodiment of the present disclosure may further include, before step S12, a process of performing quality screening on the sequence of frames to be recognized, where the process of quality screening may be denoted as step S120, and in a possible implementation, step S120 may include:
step S1201, acquiring a preset quality parameter.
Step S1202, judging whether the quality of the frame sequence to be identified meets the quality requirement or not according to the preset quality parameter, and obtaining a judgment result.
And step S1203, if the judgment result is yes, performing image matching according to the frame sequence to be identified to obtain a matching result.
And step S1204, returning to the step of collecting the frame sequence to be identified under the condition that the judgment result is negative.
In the above-described embodiments, the specific parameter form and the setting values of the preset quality parameter can be determined flexibly according to the actual situation, and are not limited to the following embodiments. In one possible implementation, the preset quality parameter may include: one or more of an orientation parameter, an angle parameter, an occlusion rate parameter, a sharpness parameter, a liveness confidence parameter, a size parameter, and a similarity parameter.
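The parameters listed above could be grouped into a small configuration object; the field names and threshold values below are illustrative assumptions only, not values specified by the text:

```python
from dataclasses import dataclass

@dataclass
class QualityParams:
    """Hypothetical preset quality parameters; thresholds are illustrative."""
    min_sharpness: float = 0.5
    max_occlusion_rate: float = 0.3
    max_yaw_angle: float = 30.0        # degrees, stands in for the angle parameter
    min_liveness_confidence: float = 0.9

def frame_meets_params(frame, p):
    """Combine the enabled checks; each field can be tuned independently,
    reflecting the flexible parameter combination described in the text."""
    return (frame["sharpness"] >= p.min_sharpness
            and frame["occlusion"] <= p.max_occlusion_rate
            and abs(frame["yaw"]) <= p.max_yaw_angle
            and frame["liveness"] >= p.min_liveness_confidence)
```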
By setting parameters in multiple forms, including the orientation parameter, angle parameter, occlusion rate parameter, and sharpness parameter, and by flexibly combining these parameter forms in practical applications, the flexibility of quality screening of the frame sequence to be identified can be effectively improved, and the finally selected frame sequence to be identified can have higher reliability, so that the result of image matching based on this frame sequence is more accurate, improving the reliability and control accuracy of the whole control process.
It can be seen from the above disclosure that the implementation form of the preset quality parameter is not limited; in practical applications, the set value of the preset quality parameter and the combination of parameters it includes may need to be modified as requirements change or for other reasons.
In one possible implementation, the preset parameter display interface may be an interface that the control method displays to developers or users. A developer may modify the value or combination form of the preset quality parameter through an adjustment interface for the preset quality parameter within the preset parameter display interface; in one example, the developer may also modify judgment conditions or other preset content through the preset parameter display interface. Fig. 2 is a schematic diagram of a preset parameter display interface according to an embodiment of the present disclosure. As can be seen from the figure, through the preset parameter display interface, a developer may modify the numerical value of a preset quality parameter, and may also modify the operation to be executed when different quality judgment results are obtained based on the preset quality parameter.
The preset quality parameters and their adjustment interface are displayed through the preset parameter display interface, so that the preset quality parameters can be adjusted more simply and intuitively, and the flexibility of implementing the image-recognition-based control method provided by the embodiment of the disclosure can be improved.
Through the above process, in the embodiment of the present disclosure, whether the quality of the frame sequence to be identified meets the quality requirement can be determined by judging the frame sequence against the read preset quality parameters. If the quality meets the requirement, the process may proceed to step S12 to perform image matching between the frame sequence and the image database; if not, the process may return to step S11 to collect a new frame sequence to be identified, until a frame sequence meeting the quality requirement is collected and used for the image matching of step S12.
It should be noted that, in one possible implementation, steps S1202 and S1204 may be executed in parallel. That is, while judging whether the quality of the current frame sequence meets the quality requirement, a new frame sequence to be identified may be collected at the same time and cached; when the judgment result for the current frame sequence is that it does not meet the quality requirement, the cached frame sequence can be used directly to continue the quality judgment. In this way, the computing capability of a multi-core CPU or GPU can be fully utilized to complete image collection and quality screening in parallel, improving resource utilization while also improving the efficiency of the whole control method.
Whether the quality of the frame sequence to be identified meets the quality requirement is judged according to the preset quality parameters, and frame sequences continue to be collected when the requirement is not met, until a frame sequence meeting the quality requirement is collected for subsequent image matching. Through this process, the reliability of the frame sequence entering the image matching process can be effectively improved, thereby improving the accuracy of the image matching result and the control precision and efficiency of the whole control method.
Specifically, how to judge whether the quality of the frame sequence to be identified meets the quality requirement according to the preset quality parameter may be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments. In one possible implementation, step S1202 may include:
Step S12021, selecting at least one frame to be identified from the frame sequence to be identified through the preset quality parameter.
Step S12022, determining whether the quality of the frame sequence to be recognized meets the quality requirement according to each frame to be recognized, and obtaining a determination result.
The manner of selecting at least one frame to be identified from the frame sequence to be identified can be set according to the actual situation. For example, N frames may be selected from the frame sequence at certain frame intervals for the quality judgment in the subsequent step S12022; the value of N is not limited here and is not limited to the following disclosed embodiments. In one example, for a frame sequence to be identified containing 30 frames, 1 frame may be selected every 6 frames starting from the first frame, so that 5 frames in total are selected as frames to be identified; in one example, the frame sequence may be divided into 5 groups of frame subsequences, each group containing 6 frames, and the 1 frame with the best quality selected from each subsequence, so that 5 frames are selected as frames to be identified; in one example, the frame with the best quality may be selected directly from the frame sequence as the frame to be identified.
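The three selection strategies in the example above (fixed frame interval, best frame per group, best frame overall) can be sketched as follows. This is an illustrative sketch, with a caller-supplied `score` function standing in for whatever per-frame quality measure is used.

```python
def select_by_interval(frames, interval=6):
    """Pick one frame every `interval` frames, starting from the first.
    A 30-frame sequence with interval 6 yields 5 frames."""
    return frames[::interval]

def select_best_per_group(frames, group_size=6, score=lambda f: f):
    """Split the sequence into groups of `group_size` and keep the
    best-scoring frame of each group."""
    groups = [frames[i:i + group_size] for i in range(0, len(frames), group_size)]
    return [max(g, key=score) for g in groups]

def select_best_overall(frames, score=lambda f: f):
    """Keep only the single best-quality frame of the whole sequence."""
    return [max(frames, key=score)]
```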
In the embodiment of the disclosure, frames in the frame sequence to be identified may be filtered through the preset quality parameter, so as to filter out frames that do not meet the requirement of the preset quality parameter, obtain at least one frame to be identified, and then perform further quality judgment. Further, the preset quality parameter in the embodiments of the present disclosure may include one set of preset quality parameters or multiple sets. For example, in one possible implementation, each frame in the frame sequence to be identified may be filtered according to the requirement of a single set of preset quality parameters; in another possible implementation, frames in the frame sequence may be screened sequentially through multiple sets of preset quality parameters. In one example, the parameter thresholds set in the multiple sets of preset quality parameters may become increasingly strict, so that the quality of the selected frames to be identified gradually increases, facilitating subsequent quality judgment. Taking the preset parameter display interface shown in fig. 2 as an example, it can be seen from the figure that the interface currently displays the preset quality parameters for the first frame. Each frame in the frame sequence to be identified is screened sequentially by these preset quality parameters, so that the first frame meeting their requirements can be obtained and used as the first frame for subsequent quality judgment. After the first frame is obtained, the sliding option for the second frame, visible behind the first frame in the figure, allows the frames after the first frame to be screened against the preset quality parameter combination for the second frame, so as to select a second frame with higher quality for subsequent quality judgment, and so on.
At least one frame to be identified is selected from the frame sequence to be identified through the preset quality parameter, and the quality of the frame sequence is then judged according to each frame to be identified. Through this process, the overall quality judgment of the frame sequence is decomposed into an easier-to-realize quality judgment performed on each frame image to be identified, and frame images that do not meet the most basic quality requirements are filtered out. This reduces the amount of data that needs to be computed when judging the quality of the frame sequence, improves the reliability of the quality judgment result, and thus improves the reliability of the subsequent image matching result and the precision of the whole control process.
Specifically, how to judge the quality of the whole frame sequence to be recognized according to each frame to be recognized by combining with the preset quality parameters can be flexibly selected according to actual conditions. In one possible implementation, step S12022 may include:
and sequentially calculating the first quality score of each frame to be identified according to the sequence of the frame sequence to be identified.
And obtaining a judgment result of no in the case that the first quality score of a frame to be identified is smaller than the first quality threshold.
And obtaining a judgment result of yes in the case that the first quality score of each frame to be identified is not less than the first quality threshold.
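The sequential judgment with early exit can be sketched as follows; per-frame thresholds are allowed to differ, so the threshold list here is an illustrative assumption.

```python
def judge_frame_sequence(frame_scores, thresholds):
    """S12022 as sequential judgment: compare each first quality score
    with its threshold in order, stopping at the first failure."""
    for score, threshold in zip(frame_scores, thresholds):
        if score < threshold:
            return False   # judgment result: no (remaining frames are skipped)
    return True            # judgment result: yes
```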
As can be seen from the above process, in one possible implementation, the quality judgment may be made frame by frame, sequentially judging whether the quality of each frame to be identified meets the quality requirement according to the order of the frames in the frame sequence. If each frame meets the quality requirement during this sequential judgment, the quality of the frame sequence can be considered to meet the quality requirement. Otherwise, as can be seen from the above-disclosed embodiments, when frames to be identified are selected from the frame sequence through multiple sets of preset quality parameters whose requirements increase stepwise, the quality of the obtained frames to be identified gradually increases; therefore, if the judgment result of a certain frame is that the quality requirement is not met, the quality judgment of the frames after it can be stopped, and it can be determined that the quality of the whole frame sequence does not meet the quality requirement.
In one possible implementation, in the embodiment of the present disclosure, whether each frame to be identified meets the quality requirement is determined by calculating a quality score for it and comparing that score with the first quality threshold. The specific quality score calculation method may be set according to the actual situation of the frame to be identified and empirical values, and is not limited to the following disclosed embodiments. As proposed in the above embodiments, the preset quality parameter may include many kinds of parameter values, so the quality score of a frame to be identified may be obtained by a combined calculation over these parameter values, allowing the score to evaluate the quality of the frame as a whole.
It should be noted that, because the quality of the frames to be identified may increase gradually, and quality judgment compares the first quality score of each frame with a first quality threshold, the specific value of the first quality threshold may also change correspondingly for different frames to be identified, and may be set flexibly according to the actual situation.
The foregoing implementation may be illustrated by way of example. In one example, two frames to be identified may be selected from a frame sequence to be identified through two sets of preset quality parameters, and the first quality score of each of the two frames may then be calculated.
In one example, the manner of selecting two frames to be identified from a frame sequence may be as follows. First, screening is performed through a first set of preset quality parameters, with the screening condition: pitch angle pitch = 25, yaw angle yaw = 20, and rotation angle roll = 20, so that frames whose absolute angle values do not exceed these screening conditions can be selected from the frame sequence to be identified.
Further, after the frames to be identified whose angles meet the requirement are screened out, they may be further screened through the remaining parameters in the first set of preset quality parameters. In the example of the present disclosure, the first set of preset quality parameters may further set a sharpness threshold sharpness = 0.6, an occlusion threshold occlusion = 0.4, and a living-body confidence threshold liveness = 0.6, so that frames whose detection values (including the sharpness value, occlusion value, living-body confidence value, and the like) meet the corresponding thresholds may be further screened out from the frames whose angles meet the requirement. In the example of the disclosure, through the first set of preset quality parameters, an image frame A may be screened out as a frame to be identified, where image frame A satisfies sharpness = 0.82, occlusion = 0.26, and liveness = 0.89.
After image frame A is obtained as the first frame to be identified of the frame sequence, the second frame to be identified can be screened out from the frames located after image frame A through the second set of preset quality parameters, and is denoted as image frame B. In the example of the disclosure, the second set of preset quality parameters may be: pitch = 20, yaw = 15, roll = 15, sharpness = 0.8, occlusion = 0.3, and liveness = 0.8; the image frame B screened out in this way satisfies sharpness = 0.91, occlusion = 0.31, and liveness = 0.92.
After obtaining image frame A and image frame B, the first quality score of image frame A may first be calculated. In the example of the present disclosure, the first quality score is calculated from three parameters, namely sharpness, occlusion, and liveness, with the weight of sharpness set to 2, the weight of occlusion set to 1, and the weight of liveness set to 3. In an application example of the present disclosure, the value of the first quality score may satisfy:
first quality score = sharpness × 2 + (1 − occlusion) × 1 + liveness × 3
Based on the above calculation method, the first quality score of image frame A is 0.82 × 2 + (1 − 0.26) × 1 + 0.89 × 3 = 5.05. In the example of the present disclosure, the first quality threshold of image frame A can be set to 5, so image frame A meets the quality requirement. The first quality score of image frame B can then be calculated as 0.91 × 2 + (1 − 0.31) × 1 + 0.92 × 3 = 5.27. In the example of the present disclosure, the first quality threshold of image frame B can be set to 5.2, so image frame B also meets the quality requirement. Since both frames to be identified meet the quality requirement, the frame sequence to be identified can be considered to meet the quality requirement, and the process can proceed to step S12 for image matching.
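The worked example above can be reproduced directly from the disclosed weights (2 for sharpness, 1 for occlusion, 3 for liveness):

```python
def first_quality_score(sharpness, occlusion, liveness):
    # first quality score = sharpness x 2 + (1 - occlusion) x 1 + liveness x 3
    return sharpness * 2 + (1 - occlusion) * 1 + liveness * 3

score_a = first_quality_score(0.82, 0.26, 0.89)  # image frame A: 5.05 in the example
score_b = first_quality_score(0.91, 0.31, 0.92)  # image frame B: 5.27 in the example
```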
Further, in the actual control process, when the quality of image frame A is judged to meet the quality requirement, image frame A can be used directly for image matching while the quality of image frame B is judged at the same time, which can further improve the parallelism of the whole control process and improve control efficiency.
The first quality score of each frame to be identified is calculated sequentially according to the order of the frame sequence, and the quality of the frame sequence is judged to meet the quality requirement when the first quality score of each frame is not less than the first quality threshold. This process exploits the fact that the quality of the frames to be identified gradually increases: when the first quality score of a certain frame is less than the first quality threshold, it is directly determined that the quality of the frame sequence does not meet the quality requirement, and subsequent higher-quality frames need not be judged further, which reduces the computation of the whole quality judgment process, improves judgment efficiency, and thus improves the efficiency of the whole control process.
The above-disclosed embodiment also proposes that, in one possible implementation, each selected frame to be identified may be screened through the same set of preset quality parameters, in which case the quality of the frames to be identified does not necessarily increase gradually; judging quality through the process proposed in the above embodiment could then yield an inaccurate result. In this case, the quality of the frame sequence to be identified may be judged through other implementations, and therefore, in one possible implementation, step S12022 may include:
and respectively calculating a second quality score of each frame to be identified.
And obtaining a third quality score of the frame sequence to be identified according to each second quality score.
And obtaining a judgment result of no in the case that the third quality score is smaller than the second quality threshold.
And obtaining a judgment result of yes in the case that the third quality score is not less than the second quality threshold.
It can be seen from the above process that, in one possible implementation, quality judgment based on each frame to be identified may proceed by calculating a second quality score for each frame, and then obtaining from these a third quality score reflecting the overall quality of the frame sequence. If the third quality score is not less than the second quality threshold, the quality of the frame sequence may be considered to meet the quality requirement; otherwise it does not, and a new frame sequence to be identified needs to be collected for subsequent image matching.
Specifically, the calculation of the second quality score of each frame to be identified may refer to the above-mentioned embodiments; the preset quality parameters, weights, and the like used may be the same as or different from those used for the first quality score, and are not described here again.
After the second quality score of each frame to be identified is obtained, how to calculate the third quality score based on these second quality scores can be flexibly determined. In one possible implementation, the average value of the second quality scores may be used as the third quality score; in another possible implementation, the second quality scores may be weighted and averaged to obtain the third quality score, and how to set the weights is not limited in the embodiment of the present disclosure.
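Both aggregation options (plain mean, weighted mean) can be sketched as:

```python
def third_quality_score(second_scores, weights=None):
    """Aggregate per-frame second quality scores into one sequence-level
    third quality score: plain mean by default, weighted mean when
    weights are supplied."""
    if weights is None:
        return sum(second_scores) / len(second_scores)
    return sum(w * s for w, s in zip(weights, second_scores)) / sum(weights)
```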
Similarly, the setting value of the second quality threshold is not limited in the embodiment of the present disclosure, and may be flexibly selected according to the actual situation.
It should be noted that, in the embodiments of the present disclosure, the descriptions of the quality score and the quality threshold, such as the first, the second, and the third, are only used to distinguish different quality scores and quality thresholds that occur in different implementations, the first, the second, and the third do not limit the relationship between the quality scores and the quality thresholds, and specifically, the quality scores are calculated in the same or different manners, and the quality thresholds have the same or different values, which can be flexibly selected according to actual situations.
The second quality score of each frame to be identified is calculated separately, the third quality score of the frame sequence is then obtained from the second quality scores, and whether the quality of the frame sequence meets the quality requirement is judged by comparing the third quality score with the second quality threshold. Through this process, the quality parameter values of each selected frame to be identified are fully used to obtain the third quality score, so that the judgment result for the frame sequence is comprehensive and representative.
After the quality judgment is performed on the frame sequence to be recognized, the image matching may be performed on the frame sequence to be recognized that meets the quality requirement through step S12, so as to obtain a matching result. The above disclosed embodiment has proposed that the implementation of step S12 is not limited, and in one possible implementation, step S12 may include:
step S121, selecting at least one frame to be matched from the frame sequence to be identified.
And S122, respectively carrying out similarity calculation on the frame to be matched and each image contained in the image database to obtain a similarity calculation result.
In step S123, in the case where the similarity calculation result is greater than the similarity threshold, the matching result is determined as a pass.
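Steps S121 to S123 can be sketched as a scan over the image database; `similarity` stands in for whatever two-image comparison is used, and the 0.85 default mirrors the example threshold discussed in this section.

```python
def match_against_database(frame, database, similarity, threshold=0.85):
    """Compare the frame to be matched against every database image
    (step S122) and pass when some similarity exceeds the threshold
    (step S123). Returns (passed, id_of_best_match)."""
    best_id, best_sim = None, 0.0
    for image_id, image in database.items():
        sim = similarity(frame, image)
        if sim > best_sim:
            best_id, best_sim = image_id, sim
    return best_sim > threshold, best_id
```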
In the above-described embodiments, at least one frame to be matched is selected from the frame sequence to be identified, and the selection manner and result are not limited in the embodiments of the present disclosure. In one possible implementation, as mentioned in the above-disclosed embodiment, some frames to be identified may already have been selected from the frame sequence when judging its quality; therefore, in one example, these frames to be identified may be used directly as frames to be matched for image matching. In one example, since the quality score of each frame to be identified needs to be calculated during quality judgment, one or several frames with the highest quality may be selected as frames to be matched in order of quality from high to low. In one possible implementation, one or several frames may also be randomly selected directly from the frame sequence for image matching.
After the frame to be matched is selected, the frame to be matched and each image in the image database can be respectively compared to calculate the similarity between the frame to be matched and each image in the image database, so that a similarity calculation result is obtained. The calculation method of the similarity is not limited in the embodiment of the present disclosure, and any method that can compare two images to determine the degree of similarity therebetween can be used as the implementation manner of step S122.
After the similarity calculation result is obtained, whether the frame to be matched passes matching against the image database can be judged. The specific process may be to judge the similarity between the frame to be matched and each image in the image database respectively; when an image exists in the image database whose similarity with the frame to be matched exceeds the similarity threshold, it can be considered that image data corresponding to the current recognition result is stored in the image database, and the frame sequence to be identified passes the matching.
The specific value of the similarity threshold may be flexibly set according to actual situations, and is not limited to the following disclosed embodiments, in one example, the similarity threshold may be set to be 0.85, that is, a frame to be matched whose similarity is higher than 0.85 may be considered as a pass match.
Through the process, the comprehensive comparison between the frame sequence to be recognized and each image in the image database can be realized, the matching result can have higher accuracy by traversing the image database, and when the matching is passed, which image in the image database is matched with the frame sequence to be recognized can be simultaneously confirmed, so that the target object can be conveniently determined during subsequent control.
Since the similarity calculation result may deviate, even when the frame to be matched is selected from a frame sequence obtained through quality screening, the similarity calculation result may be inaccurate. Therefore, when the similarity calculation result is not greater than the similarity threshold, matching may be performed several more times to reduce the possibility of an inaccurate matching result. Thus, in one possible implementation, step S12 may further include:
In step S124, when the similarity calculation result is not greater than the similarity threshold, the current similarity calculation count is acquired.
In step S125, in the case where the similarity calculation count is greater than the count threshold, the matching result is determined as a matching failure.
And step S126, returning to the step of collecting the frame sequence to be identified in the case where the similarity calculation count is not greater than the count threshold.
The similarity calculation count may be obtained statistically; in one possible implementation, a variable with an initial value of 0 may be set and incremented by 1 each time a similarity calculation is performed, so as to count the number of similarity calculations.
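The counter variable and retry flow of steps S124 to S126 can be sketched as follows; the hypothetical helper `acquire_and_match` bundles collection, quality screening, and similarity calculation into one attempt.

```python
def match_with_retries(acquire_and_match, count_threshold=5):
    """Retry the collect/screen/match cycle until a similarity result
    passes or the similarity-calculation count reaches the threshold."""
    count = 0                       # variable with initial value 0
    while True:
        count += 1                  # incremented once per similarity calculation
        if acquire_and_match():
            return "match passed"
        if count >= count_threshold:
            return "match failed"   # step S125: count threshold reached
```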
It can be seen from the foregoing disclosure that, when the similarity calculation result is not greater than the similarity threshold, the current similarity calculation count may be read. If the count is not greater than the count threshold, the process may return to the step of collecting the frame sequence to be identified in step S11, and the above processes of quality screening, similarity calculation, and so on are repeated to re-determine whether an image whose similarity with the frame sequence exceeds the similarity threshold exists in the image database. If the count is greater than the count threshold, it may be determined that no such image exists in the image database, that is, the image corresponding to the frame sequence to be identified has not been stored in the image database in advance; in this case the matching result is a matching failure, and the subsequent control process follows.
The specific value of the count threshold may also be set according to the actual situation. In one example, the count threshold may be set to 5; that is, if after 5 similarity calculations no image whose similarity meets the similarity threshold is found in the image database, the frame sequence to be identified may be considered not to match the images in the image database.
Through the process, the possibility of error occurrence in matching can be reduced through multiple times of similarity calculation, the accuracy of the image matching result is enhanced, and the accuracy of the control result is improved.
After the matching result is obtained, the target object may be selected based on the matching result and step S13, and how to select may be flexibly determined according to the matching result. In one possible implementation, step S13 may include:
step S131, when the matching result is that the matching is passed, selecting an object corresponding to the frame sequence to be identified from the candidate objects in the non-idle state in the candidate object set as the target object.
And step S132, under the condition that the matching result is that the matching fails, randomly selecting the candidate object in the idle state from the candidate object set as the target object.
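Steps S131 and S132 over a hypothetical locker list can be sketched as follows; the `idle` and `owner` fields are illustrative assumptions about how candidate state might be recorded.

```python
import random

def select_target(match_passed, candidates, matched_id=None):
    """Pick the target object per steps S131-S132: the user's occupied
    locker when matching passes, a random idle locker when it fails."""
    if match_passed:
        # step S131: the non-idle candidate tied to the matched image
        for c in candidates:
            if not c["idle"] and c.get("owner") == matched_id:
                return c
        return None
    # step S132: random idle candidate for a deposit
    idle = [c for c in candidates if c["idle"]]
    return random.choice(idle) if idle else None
```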
In the foregoing disclosure, it has been proposed that the implementation form of the candidate object set may be flexibly determined by the actual application scenario. In one possible implementation, the method proposed by the present disclosure may be applied to the control of storage devices such as express cabinets, and the controlled candidate object set may be an express cabinet formed by multiple lockers. In practical application, a locker may be in one of two states: a non-idle state, occupied by a stored article, or an idle state, not occupied by any article. Accordingly, for different matching results, which state of candidate object is selected as the target object, and how it is selected, may vary flexibly.
As can be seen from step S131, when the matching result is that the matching passes, the identified user has related image information stored in the image database. This indirectly indicates that the user has previously operated the express cabinet, or that the user's information is stored in it; that is, the user has finished a deposit, and the requirement at this time should be to open the previously used locker to retrieve the item. Therefore, the candidate object in the non-idle state corresponding to the matched image in the image database can be found; this object is the locker the user used for the previous deposit operation, and can thus be used as the target object.
As can be seen from step S132, when the matching result is that the matching fails, no image information related to the identified user is stored in the image database. This indirectly indicates that the user has not operated the express cabinet before, that is, the user most probably needs to store an article; therefore, one of the candidate objects in the idle state may be randomly selected as the target object.
By selecting candidate objects in different states as the target object under different matching results, the above process can, upon receiving the same operation request, adaptively determine the operation to be executed according to the matching condition of the frame sequence to be identified, and determine the target object corresponding to that operation. This improves the degree of automation of the control and the convenience of the whole control method for the user.
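The selection logic of steps S131 and S132 can be sketched as follows; the `Compartment` structure, the `select_target` function, and the user-binding field are illustrative names assumed for this sketch, not part of the disclosure:

```python
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Compartment:
    compartment_id: int
    idle: bool                        # True: unoccupied; False: holds a stored article
    bound_user: Optional[str] = None  # identifier bound when the article was stored

def select_target(candidates: List[Compartment], match_passed: bool,
                  matched_user: Optional[str] = None) -> Optional[Compartment]:
    """Select a target compartment according to the matching result."""
    if match_passed:
        # Matching passed (S131): reopen the non-idle compartment bound to this user.
        for c in candidates:
            if not c.idle and c.bound_user == matched_user:
                return c
        return None
    # Matching failed (S132): the user most probably needs to store an article,
    # so pick any idle compartment at random.
    idle = [c for c in candidates if c.idle]
    return random.choice(idle) if idle else None
```

Returning `None` when no suitable compartment exists is a design choice of this sketch; a real controller would surface an error to the user instead.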
Further, after the target object is determined, the target object may be controlled to perform a preset operation through step S14. In one possible implementation, step S14 may include:
in step S141, in the case that the matching result is that the matching is passed, the target object is controlled to perform an opening operation.
And in step S142, in the case that the matching result is that the matching fails, the target object is controlled to perform an opening operation, and the image database is updated according to the frame sequence to be identified.
As can be seen from the foregoing disclosed embodiments, for the scene of controlling the express cabinet to open, the target object needs to be controlled to perform the opening operation regardless of whether the matching result is a pass or a failure, and regardless of whether the corresponding operation is storage or retrieval. However, when the matching result is a failure, it may be indirectly inferred that the corresponding operation at this time is a storage operation.
In particular, the implementation form of updating the image database according to the frame sequence to be identified is not limited. In a possible implementation manner, one or more frames may be randomly selected directly from the frame sequence to be identified and stored in the image database. In another possible implementation, since the quality scores of some frames in the frame sequence to be identified may have been calculated in a preceding step, the one or more frames with the highest quality scores may be selected and saved in the image database. After the images are stored, they may further be bound with the corresponding target object (namely, an express cabinet compartment), so as to facilitate subsequent operations such as article retrieval.
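A minimal sketch of the second implementation (saving the highest-scoring frames and binding them to the target compartment); the dictionary-based database and all names here are assumptions made for illustration:

```python
def update_image_database(database, frames, quality_scores, target_id, top_k=1):
    """Append the top_k highest-scoring frames, each bound to the target compartment."""
    # Pair each frame with its previously computed quality score and rank descending.
    ranked = sorted(zip(frames, quality_scores), key=lambda pair: pair[1], reverse=True)
    for frame, score in ranked[:top_k]:
        database.append({"image": frame, "score": score, "target": target_id})
    return database
```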
By controlling the target object to open when the matching passes, and controlling it to open while updating the image database when the matching fails, the association between the image database and the controlled object can be further established, which improves the convenience and accuracy of the whole control process.
Further, in a possible implementation manner, the control method provided in the embodiment of the present disclosure may further include:
and changing the state of the target object according to preset operation.
As can be seen from the above disclosed embodiments, the control method provided in the embodiments of the present disclosure can be applied to the control of objects such as an express cabinet or a storage cabinet; for example, the express cabinet is controlled to open according to a face recognition result so that an article can be stored, or so that a user can conveniently take out an article. After being opened, the state of the corresponding compartment may change. For example, in a possible implementation manner, after a compartment in an idle state is opened for storing an article, the compartment cannot be opened by other users for storage after being closed; accordingly, the state of the target object may be changed from the idle state to a non-idle state. In another possible implementation, after a compartment in a non-idle state is opened for article retrieval, the user taking the article may no longer need the compartment, and its state may be changed from the non-idle state to an idle state.
In order to distinguish whether a compartment in the non-idle state still needs to be used after being opened, in a possible implementation manner, after such a compartment is opened, a request inquiring whether the compartment still needs to be used may be issued, and whether to modify the state of the compartment is determined according to a response signal to the request.
By changing the state of the target object according to the preset operation, the target object can be released in time after the user's requirement has been fulfilled, so that the controlled object can be reused cyclically.
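The state change described above can be sketched as follows, assuming a dictionary-based compartment record; the `keep_using` flag stands in for the response to the continued-use query mentioned earlier, and all names are illustrative:

```python
def change_state(compartment: dict, keep_using: bool = False) -> dict:
    """Update a compartment's state after it has performed the opening operation.

    An idle compartment opened for storage becomes non-idle; a non-idle
    compartment opened for retrieval is released (set idle) unless the user
    answers the continued-use query affirmatively.
    """
    if compartment["idle"]:
        compartment["idle"] = False      # storage: compartment is now occupied
    elif not keep_using:
        compartment["idle"] = True       # retrieval finished: release for reuse
    return compartment
```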
Fig. 3 shows a block diagram of a control device based on image recognition according to an embodiment of the present disclosure. As shown, the image recognition-based control device 20 may include:
a frame sequence obtaining module 21, configured to obtain the frame sequence to be identified in the case that the control request is received.
And the matching module 22 is used for performing image matching with the image database according to the frame sequence to be identified to obtain a matching result.
And the selecting module 23 is configured to select a target object from the candidate object set according to the matching result.
And the control module 24 is used for controlling the target object to execute preset operation.
In a possible implementation manner, the apparatus further includes a quality determination module preceding the matching module, the quality determination module being configured to: acquire a preset quality parameter; judge whether the quality of the frame sequence to be identified meets the quality requirement according to the preset quality parameter to obtain a judgment result; perform image matching according to the frame sequence to be identified to obtain a matching result in the case that the judgment result is yes; and return to the step of acquiring the frame sequence to be identified in the case that the judgment result is no.
In one possible implementation, the quality determination module is further configured to: select at least one frame to be identified from the frame sequence to be identified according to the preset quality parameter; and judge, according to each frame to be identified, whether the quality of the frame sequence to be identified meets the quality requirement to obtain the judgment result.
In one possible implementation manner, the quality determination module is further configured to: sequentially calculate a first quality score of each frame to be identified according to the order of the frame sequence to be identified; obtain a judgment result of no in the case that any first quality score is smaller than a first quality threshold; and obtain a judgment result of yes in the case that the first quality score of each frame to be identified is not smaller than the first quality threshold.
In one possible implementation manner, the quality determination module is further configured to: respectively calculate a second quality score of each frame to be identified; obtain a third quality score of the frame sequence to be identified according to the second quality scores; obtain a judgment result of no in the case that the third quality score is smaller than a second quality threshold; and obtain a judgment result of yes in the case that the third quality score is not smaller than the second quality threshold.
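The two judgment strategies of the quality determination module can be sketched as below. Using the mean to aggregate the second quality scores into the third quality score is an assumption of this sketch (the disclosure leaves the aggregation unspecified), and `score_frame` is a placeholder for any per-frame quality metric:

```python
def judge_sequentially(frames, score_frame, first_threshold):
    """Per-frame check: fail as soon as one first quality score
    falls below the first quality threshold."""
    for frame in frames:
        if score_frame(frame) < first_threshold:
            return False   # judgment result: no
    return True            # every frame passed: yes

def judge_by_aggregate(frames, score_frame, second_threshold):
    """Sequence-level check: aggregate the second quality scores
    (here: their mean) into a third quality score and compare it
    with the second quality threshold."""
    third_score = sum(score_frame(f) for f in frames) / len(frames)
    return third_score >= second_threshold
```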
In one possible implementation, the preset quality parameter includes one or more of: an orientation parameter, an angle parameter, an occlusion rate parameter, a sharpness parameter, a living-body confidence parameter, a size parameter, and a similarity parameter.
In one possible implementation manner, the preset quality parameters and an adjustment interface for the preset quality parameters are displayed through a preset parameter display interface.
In one possible implementation, the matching module is configured to: select at least one frame to be matched from the frame sequence to be identified; perform similarity calculation between the frame to be matched and each image contained in the image database to obtain a similarity calculation result; and determine the matching result as a matching pass in the case that the similarity calculation result is greater than a similarity threshold.
In one possible implementation, the matching module is further configured to: acquire the current number of similarity calculations in the case that the similarity calculation result is not greater than the similarity threshold; determine the matching result as a matching failure in the case that the number of similarity calculations is greater than a times threshold; and return to the step of acquiring the frame sequence to be identified in the case that the number of similarity calculations is not greater than the times threshold.
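The retry behavior of the matching module can be sketched as a loop; `acquire_frames` and `best_similarity` are placeholder callables assumed for illustration, not names from the disclosure:

```python
def match_with_retries(acquire_frames, best_similarity, similarity_threshold,
                       times_threshold=5):
    """Re-acquire the frame sequence and re-compare until a comparison passes
    or the number of similarity calculations reaches the times threshold."""
    attempts = 0
    while attempts < times_threshold:
        frames = acquire_frames()          # return to the acquisition step
        attempts += 1
        if best_similarity(frames) > similarity_threshold:
            return True                    # matching passed
    return False                           # matching failed after times_threshold tries
```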
In one possible implementation, the selection module is configured to: select an object corresponding to the frame sequence to be identified from the candidate objects in the non-idle state in the candidate object set as the target object in the case that the matching result is that the matching is passed; and randomly select a candidate object in the idle state from the candidate object set as the target object in the case that the matching result is that the matching fails.
In one possible implementation, the control request comprises an open request.
In one possible implementation, the control module is configured to: control the target object to perform an opening operation in the case that the matching result is that the matching is passed; and control the target object to perform an opening operation and update the image database according to the frame sequence to be identified in the case that the matching result is that the matching fails.
In one possible implementation, the apparatus further includes a state changing unit, and the state changing unit is configured to: and changing the state of the target object according to preset operation.
Application scenario example
For cabinets with a storage function, such as storage cabinets, deposit cabinets and express delivery cabinets, a user controls the cabinet to execute a corresponding operation by selecting "store" or "retrieve". In some scenarios, however, this choice is ambiguous: a user storing items for a long period may only want to take out a certain item when opening the express cabinet and keep using the compartment, while a user temporarily storing an item no longer needs the compartment after taking the item out. In such cases, the user cannot readily determine whether to select the "storage" or the "retrieval" operation.
Therefore, a single control method that is suitable for a variety of scenarios has high practicability.
Fig. 4 to 7 are schematic diagrams illustrating an application example according to the present disclosure. Taking the control of an express cabinet's opening as an example, the specific process of the control method proposed in the embodiment of the present disclosure is described below:
as can be seen from fig. 4, the express cabinet in the application example of the present disclosure only shows one "open cabinet" button, and when the user clicks the "open cabinet" button before arriving at the screen, the control end may receive the control request, and at this time, a camera on the express cabinet may start to collect an image of the user. The display content of the express delivery cabinet in the collection process is shown in fig. 5.
In an application example of the present disclosure, after the images of the user are acquired, multiple groups of preset quality parameters may be set to perform quality screening on the acquired images, so as to exclude image frames of lower quality and retain several image frames of higher quality. For the setting interface of the preset quality parameters, reference may be made to fig. 2 provided in the above disclosed embodiments, and details are not repeated here.
In the application example of the present disclosure, parameters such as a face inclination angle, a face side angle, a face pitch angle, an occlusion rate, a living-body detection rate, a blurring degree and a face size are set, so that all collected image frames whose angles do not meet the requirements, which are heavily occluded, which fail living-body detection, or which are blurred or of an unsuitable size are excluded, and only the image frames meeting the requirements are retained.
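A hypothetical parameter set mirroring the parameters listed above; the concrete field names and limit values are illustrative assumptions, not taken from the disclosure:

```python
# Illustrative preset quality parameters; every threshold value is an assumption.
PRESET_QUALITY_PARAMS = {
    "max_roll_deg": 15,     # face inclination angle
    "max_yaw_deg": 30,      # face side angle
    "max_pitch_deg": 20,    # face pitch angle
    "max_occlusion": 0.2,   # at most 20% of the face occluded
    "min_liveness": 0.9,    # living-body detection confidence
    "max_blur": 0.3,        # blurring degree
    "min_face_px": 80,      # minimum face size in pixels
}

def frame_passes(metrics, params=PRESET_QUALITY_PARAMS):
    """Return True only if every measured metric satisfies its preset limit."""
    return (abs(metrics["roll"]) <= params["max_roll_deg"]
            and abs(metrics["yaw"]) <= params["max_yaw_deg"]
            and abs(metrics["pitch"]) <= params["max_pitch_deg"]
            and metrics["occlusion"] <= params["max_occlusion"]
            and metrics["liveness"] >= params["min_liveness"]
            and metrics["blur"] <= params["max_blur"]
            and metrics["face_px"] >= params["min_face_px"])
```

Keeping the limits in a single dictionary matches the adjustable-interface idea described earlier: an operator can tune one value without touching the screening code.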
After the higher-quality image frames are obtained, they can be compared one by one with the images in the face base library. If the similarity between an image in the face base library and an acquired frame exceeds a threshold, the face of the current user is stored in the face base library; if no similarity exceeds the threshold, the user images are acquired again, and the quality screening and comparison operations are repeated. If, after M comparisons (M may be set freely, and is set to 5 in the application example of the present disclosure), no image whose similarity to the user exceeds the threshold has been found in the face base library, it can be concluded that the user's face is not stored in the library.
In the application example of the disclosure, when the above process is executed, image acquisition and face detection/screening can be completed in parallel by using the computing power of a multi-core CPU/GPU, thereby improving the efficiency of the control process. Meanwhile, images can be cached at a certain time interval during collection, and quality screening and comparison can then be executed concurrently, which increases the difference between image frames, improves the accuracy of the comparison result, and thus improves the control precision.
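The parallel acquisition-and-screening pipeline can be sketched with a producer thread and a worker pool; this thread-based structure is an assumption, as the disclosure only states that the two stages run concurrently:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def capture_and_screen(capture, screen, n_frames, max_workers=4):
    """Run acquisition in one thread while screening runs concurrently in a pool;
    frames that fail screening are dropped (screen returns None for them)."""
    frames = queue.Queue()

    def producer():
        for i in range(n_frames):
            frames.put(capture(i))   # frames could be cached at a fixed interval here
        frames.put(None)             # sentinel: capture finished

    threading.Thread(target=producer, daemon=True).start()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = []
        while True:
            frame = frames.get()
            if frame is None:
                break
            futures.append(pool.submit(screen, frame))
        results = [f.result() for f in futures]
    return [r for r in results if r is not None]
```

A thread pool suffices for I/O-bound capture; CPU/GPU-bound screening would more realistically use processes or batched GPU inference.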
In the application example of the present disclosure, the storage or retrieval operation is performed according to whether the acquired image of the user is stored in the face base library. Specifically, when the image of the user is not stored in the face base library, the user is a new user who needs to perform a storage operation; therefore, a currently unused compartment may be randomly opened to let the user store the article, and a storage prompt is sent to the user, with the effect shown in fig. 6. Meanwhile, the collected images of the user can be stored in the face base library and bound with the opened compartment.
When the image of the user is in the face base library, the user has used the express cabinet before and is likely to perform a retrieval operation; therefore, the compartment bound with the user can be opened, and the user's usage information can be displayed, with the effect shown in fig. 7.
Through the above process, both the storage and the retrieval operation can be provided to the user through a single button, improving convenience of use. Meanwhile, by setting multiple groups of preset quality parameters to screen the frame images and selecting the images of the best current quality for comparison with the face base library, the rejection rate caused by differing cameras or environments is effectively reduced, and the accuracy of recognition and control is effectively improved. In addition, the preset quality parameters can be flexibly adjusted through the interface, so that operators can more simply and intuitively tune the parameters to the optimal values for the current environment.
In addition, the control method proposed in the embodiment of the present disclosure may also be applied to control of other objects, and is not limited to the application example disclosed above.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; details are omitted from the present disclosure for brevity.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 8 is a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 8, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 9 is a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 9, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, so as to implement various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A control method based on image recognition is characterized by comprising the following steps:
under the condition of receiving a control request, acquiring a frame sequence to be identified;
performing image matching with an image database according to the frame sequence to be identified to obtain a matching result;
selecting a target object from a candidate object set according to the matching result;
and controlling the target object to execute preset operation.
2. The method of claim 1, wherein before performing image matching with an image database according to the frame sequence to be identified to obtain a matching result, the method further comprises:
acquiring a preset quality parameter;
judging whether the quality of the frame sequence to be identified meets the quality requirement or not according to the preset quality parameter to obtain a judgment result;
in a case that the judgment result is positive, performing image matching according to the frame sequence to be identified to obtain a matching result;
and in a case that the judgment result is negative, returning to the step of acquiring the frame sequence to be identified.
3. The method according to claim 2, wherein the determining whether the quality of the frame sequence to be identified meets the quality requirement according to the preset quality parameter to obtain a determination result comprises:
selecting at least one frame to be identified from the sequence of frames to be identified according to the preset quality parameter;
and judging whether the quality of the frame sequence to be identified meets the quality requirement or not according to each frame to be identified to obtain a judgment result.
4. The method according to claim 3, wherein the determining whether the quality of the frame sequence to be identified meets the quality requirement according to each frame to be identified to obtain a determination result comprises:
sequentially calculating a first quality score of each frame to be identified in the order of the frame sequence to be identified;
obtaining a negative judgment result in a case that any first quality score is smaller than a first quality threshold;
and obtaining a positive judgment result in a case that the first quality score of each frame to be identified is not smaller than the first quality threshold.
5. The method according to claim 3, wherein the determining whether the quality of the frame sequence to be identified meets the quality requirement according to each frame to be identified to obtain a determination result comprises:
respectively calculating a second quality score of each frame to be identified;
obtaining a third quality score of the frame sequence to be identified according to each second quality score;
obtaining a negative judgment result in a case that the third quality score is smaller than a second quality threshold;
and obtaining a positive judgment result in a case that the third quality score is not smaller than the second quality threshold.
6. The method according to any one of claims 2 to 5, wherein the preset quality parameters comprise: one or more of an orientation parameter, an angle parameter, an occlusion rate parameter, a sharpness parameter, a liveness confidence parameter, a size parameter, and a similarity parameter.
7. The method according to any one of claims 2 to 6, wherein the preset quality parameter and an interface for adjusting the preset quality parameter are displayed through a preset parameter display interface.
8. An image recognition-based control device, comprising:
the frame sequence acquisition module is used for acquiring a frame sequence to be identified under the condition of receiving the control request;
the matching module is used for carrying out image matching with an image database according to the frame sequence to be identified to obtain a matching result;
the selecting module is used for selecting a target object from the candidate object set according to the matching result;
and the control module is used for controlling the target object to execute preset operation.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 7.
10. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 7.
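As an illustrative reading of claims 1 to 5, the quality-gated recognition flow can be sketched as follows. This is a hypothetical sketch, not the patented implementation: all function names are assumed interfaces, the retry cap is an added safeguard the claims do not recite (claim 2 returns to the acquisition step unconditionally), and the mean is an assumed rule for combining second quality scores into the third quality score.

```python
from typing import Callable, List, Optional

def quality_ok_sequential(frames: List[float], score: Callable[[float], float],
                          first_threshold: float) -> bool:
    """Claim 4: score frames in sequence order; the result is negative
    if any frame's first quality score falls below the first threshold."""
    return all(score(f) >= first_threshold for f in frames)

def quality_ok_aggregate(frames: List[float], score: Callable[[float], float],
                         second_threshold: float) -> bool:
    """Claim 5: combine per-frame second quality scores into a third
    quality score for the whole sequence (the mean is an assumed rule)
    and compare it once against the second quality threshold."""
    third_score = sum(score(f) for f in frames) / len(frames)
    return third_score >= second_threshold

def handle_control_request(capture: Callable[[], List[float]],
                           quality_ok: Callable[[List[float]], bool],
                           match: Callable[[List[float]], object],
                           select: Callable[[object], object],
                           execute: Callable[[object], object],
                           max_attempts: int = 3) -> Optional[object]:
    """Claims 1-2: acquire a frame sequence to be identified, re-acquire
    while it fails the quality requirement, then match it against the
    image database, select a target object from the candidate set, and
    control the target object to execute the preset operation."""
    for _ in range(max_attempts):
        frames = capture()
        if not quality_ok(frames):
            continue  # claim 2: return to the acquisition step
        return execute(select(match(frames)))
    return None  # gave up after repeated low-quality sequences
```

For example, treating each frame as its own raw score, `quality_ok_sequential([0.9, 0.4], lambda s: s, 0.5)` rejects the sequence at the second frame, while `quality_ok_aggregate` on the same frames (mean 0.65) accepts it against the same threshold, which is the practical difference between the per-frame gate of claim 4 and the sequence-level gate of claim 5.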
CN201911045426.8A 2019-10-30 2019-10-30 Control method and device based on image recognition, electronic equipment and storage medium Pending CN110796094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911045426.8A CN110796094A (en) 2019-10-30 2019-10-30 Control method and device based on image recognition, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110796094A 2020-02-14

Family

ID=69442205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911045426.8A Pending CN110796094A (en) 2019-10-30 2019-10-30 Control method and device based on image recognition, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110796094A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353434A (en) * 2020-02-28 2020-06-30 北京市商汤科技开发有限公司 Information identification method, device, system, electronic equipment and storage medium
CN111680668A (en) * 2020-08-11 2020-09-18 深圳诚一信科技有限公司 Identification method and system applied to express cabinet and express cabinet
CN112306610A (en) * 2020-11-02 2021-02-02 北京字节跳动网络技术有限公司 Terminal control method and device and electronic equipment
CN112862802A (en) * 2021-02-26 2021-05-28 中国人民解放军93114部队 Location identification method based on edge appearance sequence matching
CN113095289A (en) * 2020-10-28 2021-07-09 重庆电政信息科技有限公司 Massive image preprocessing network method based on urban complex scene

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6353829B1 (en) * 1998-12-23 2002-03-05 Cray Inc. Method and system for memory allocation in a multiprocessing environment
CN107590452A (en) * 2017-09-04 2018-01-16 武汉神目信息技术有限公司 A kind of personal identification method and device based on gait and face fusion
CN109145842A (en) * 2018-08-29 2019-01-04 深圳市智莱科技股份有限公司 The method and device of chamber door based on image recognition control Intelligent storage cabinet
CN109637040A (en) * 2018-12-28 2019-04-16 深圳市丰巢科技有限公司 A kind of express delivery cabinet pickup method, apparatus, express delivery cabinet and storage medium
CN110211302A (en) * 2019-04-18 2019-09-06 江苏图云智能科技发展有限公司 The control method and device of self-service storage cabinet

Similar Documents

Publication Publication Date Title
TWI775091B (en) Data update method, electronic device and storage medium thereof
CN110796094A (en) Control method and device based on image recognition, electronic equipment and storage medium
US9667774B2 (en) Methods and devices for sending virtual information card
CN106331504B (en) Shooting method and device
CN107944409B (en) Video analysis method and device capable of distinguishing key actions
CN108985176B (en) Image generation method and device
US20170034325A1 (en) Image-based communication method and device
CN109934275B (en) Image processing method and device, electronic equipment and storage medium
JP2021531554A (en) Image processing methods and devices, electronic devices and storage media
CN110781957A (en) Image processing method and device, electronic equipment and storage medium
CN106534951B (en) Video segmentation method and device
CN112101238A (en) Clustering method and device, electronic equipment and storage medium
RU2645282C2 (en) Method and device for calling via cloud-cards
US10083346B2 (en) Method and apparatus for providing contact card
EP3261046A1 (en) Method and device for image processing
CN111523346B (en) Image recognition method and device, electronic equipment and storage medium
US11551465B2 (en) Method and apparatus for detecting finger occlusion image, and storage medium
EP3040912A1 (en) Method and device for classifying pictures
CN109101542B (en) Image recognition result output method and device, electronic device and storage medium
CN111242303A (en) Network training method and device, and image processing method and device
CN109344703B (en) Object detection method and device, electronic equipment and storage medium
CN110929545A (en) Human face image sorting method and device
CN111062407B (en) Image processing method and device, electronic equipment and storage medium
CN111797746A (en) Face recognition method and device and computer readable storage medium
CN111832455A (en) Method, device, storage medium and electronic equipment for acquiring content image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200214