CN112308018A - Image identification method, system, electronic equipment and storage medium - Google Patents

Image identification method, system, electronic equipment and storage medium

Info

Publication number
CN112308018A
CN112308018A (application CN202011307535.5A)
Authority
CN
China
Prior art keywords
image
close
panoramic
preset
panoramic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011307535.5A
Other languages
Chinese (zh)
Inventor
高海超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Hongcheng Opto Electronics Co Ltd
Original Assignee
Anhui Hongcheng Opto Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Hongcheng Opto Electronics Co Ltd filed Critical Anhui Hongcheng Opto Electronics Co Ltd
Priority to CN202011307535.5A priority Critical patent/CN112308018A/en
Priority to PCT/CN2020/141047 priority patent/WO2022105027A1/en
Publication of CN112308018A publication Critical patent/CN112308018A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Abstract

One or more embodiments of the present specification provide an image recognition method, system, electronic device, and storage medium, including: acquiring a panoramic image and a close-up image, wherein the panoramic image comprises a plurality of objects and the close-up image comprises a preset object; comparing the close-up image with the panoramic image, and determining a corresponding area in the panoramic image that corresponds to the close-up image; performing image recognition on the close-up image, and determining recognition information of the preset object; and determining a recognition result of the object in the corresponding area according to the correspondence between the corresponding area and the close-up image, based on the recognition information. In one or more embodiments of the present disclosure, a panoramic image of a scene and a close-up image of a specific object are acquired separately, and a relationship between the close-up image and the panoramic image is established, so that unclear parts of the panoramic image are recognized with the help of the close-up image. This addresses the problems that, in a large scene, distant objects are captured blurrily by a panoramic camera, which hinders image recognition and leads to a high recognition error rate.

Description

Image identification method, system, electronic equipment and storage medium
Technical Field
One or more embodiments of the present disclosure relate to the field of image recognition technologies, and in particular, to an image recognition method, an image recognition system, an electronic device, and a storage medium.
Background
With the development of modern imaging technology, image recognition has become widespread and is now used in nearly every corner of daily life. Its main purpose is to determine, more accurately and more quickly, the identity of an object shown in an image or image region, in fields such as face recognition, contraband detection, and the like.
However, the prior art still faces several problems in image recognition. Taking face recognition as an example, when many objects in a large scene are to be recognized, factors such as distance and focus position can leave a specific object blurred in the panoramic image captured by a panoramic camera, which hampers recognition analysis and may lead to recognition failure.
Disclosure of Invention
In view of the above, one or more embodiments of the present disclosure are directed to an image recognition method, an image recognition system, an electronic device, and a storage medium, so as to solve the problem that the success rate of recognizing a specific object in a panoramic image captured by a panoramic camera is low.
In view of the above object, one or more embodiments of the present specification provide an image recognition method including:
acquiring a panoramic image and a close-up image, wherein the panoramic image comprises a plurality of objects, the close-up image comprises a preset object, and the preset object is an object meeting a preset condition in the plurality of objects;
comparing the close-up image with the panoramic image, and determining a corresponding area corresponding to the close-up image in the panoramic image, wherein the corresponding area comprises the preset object;
performing image recognition on the close-up image, and determining the recognition information of the preset object;
and determining the recognition result of the object in the corresponding area according to the corresponding relation between the corresponding area and the close-up image on the basis of the recognition information.
In some embodiments, said comparing said close-up image with said panoramic image, and determining a corresponding region of said panoramic image corresponding to said close-up image, comprises:
determining a comparison area of the close-up image and a comparison area of the panoramic image;
carrying out similarity comparison on the comparison area of the close-up image and the comparison area of the panoramic image;
and if the similarity comparison accords with the set conditions, taking the comparison area of the panoramic image as the corresponding area.
In some embodiments, performing image recognition on the close-up image and determining the recognition information of the preset object includes:
and performing feature recognition on the close-up image, determining the identity information of the preset object corresponding to the close-up image, and taking the identity information as the recognition information.
In some embodiments, the preset object is specifically:
and the distance from the plurality of objects to the acquisition position of the panoramic image is greater than a preset threshold value.
In some embodiments, the preset object is specifically:
and the distance from the plurality of objects to the focus area of the panoramic image is larger than a preset threshold value.
In some embodiments, the preset object is specifically:
among the plurality of objects, an object whose sharpness is smaller than a preset threshold in the panoramic image.
In some embodiments, the panoramic image and the close-up image are captured during the same time period.
Based on the same concept, one or more embodiments of the present specification further provide an image recognition system, comprising: a panoramic camera, a close-up camera, and a processor; wherein:
the panoramic camera is configured to acquire a panoramic image, and the panoramic image comprises a plurality of objects;
the close-up camera is configured to acquire a close-up image, wherein the close-up image comprises a preset object, and the preset object is an object meeting a preset condition in the plurality of objects;
the processor is configured to perform the method of any of the above.
Based on the same concept, one or more embodiments of the present specification further provide an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the method according to any one of the above when executing the program.
Based on the same concept, one or more embodiments of the present specification also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method of any one of the above.
As can be seen from the above description, one or more embodiments of the present specification provide an image recognition method, system, electronic device, and storage medium, including: acquiring a panoramic image and a close-up image, wherein the panoramic image comprises a plurality of objects, the close-up image comprises a preset object, and the preset object is an object meeting a preset condition in the plurality of objects; comparing the close-up image with the panoramic image, and determining a corresponding area in the panoramic image that corresponds to the close-up image, the corresponding area comprising the preset object; performing image recognition on the close-up image, and determining the recognition information of the preset object; and determining the recognition result of the object in the corresponding area according to the correspondence between the corresponding area and the close-up image, based on the recognition information. In one or more embodiments of the present disclosure, a panoramic image of a scene and a close-up image of a specific object are acquired separately, and a relationship between the close-up image and the panoramic image is established, so that unclear parts of the panoramic image are recognized with the help of the close-up image. This addresses the problems that, in a large scene, distant objects are captured blurrily by a panoramic camera, which hinders image recognition and leads to a high recognition error rate.
Drawings
In order to more clearly illustrate one or more embodiments of the present specification or the prior-art solutions, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is evident that the following drawings show only one or more embodiments of the present specification, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of an image recognition method according to one or more embodiments of the present disclosure;
fig. 2 is a schematic structural diagram of an electronic device according to one or more embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present specification more apparent, the present specification is further described in detail below with reference to the accompanying drawings in combination with specific embodiments.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present specification have the ordinary meaning understood by those skilled in the art to which the present disclosure belongs. The words "first", "second", and similar terms used in this disclosure do not denote any order, quantity, or importance, but are only used to distinguish different components. The word "comprising", "comprises", or the like means that the element or item preceding the word covers the elements, items, or method steps listed after the word and their equivalents, without excluding other elements, items, or method steps. The terms "connected", "coupled", and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are used only to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
As described in the Background section, in a large scene or large space, such as a big classroom or a large storage room, objects are usually identified by capturing a panoramic image covering the entire space with a panoramic camera and then performing image recognition on that panoramic image. During recognition, factors such as distance and focus position can make a specific person or object (for example, a person in the back row or at the far end) appear small or blurry in the panoramic image, which hampers recognition, easily causes recognition errors or failures, and greatly affects the accuracy of image recognition.
In view of the above practical situations, one or more embodiments of the present specification provide an image recognition scheme in which a panoramic image of a scene and a close-up image of a specific object are acquired separately, and a relationship between the close-up image and the panoramic image is established, so that unclear parts of the panoramic image are recognized with the help of the close-up image. This addresses the problems that distant objects are captured blurrily by a panoramic camera in a large scene, which hinders image recognition and leads to a high recognition error rate.
Referring to fig. 1, which is a schematic flow chart of an image recognition method according to an embodiment of the present disclosure, the method specifically includes the following steps:
Step 101: acquiring a panoramic image and a close-up image, wherein the panoramic image comprises a plurality of objects, the close-up image comprises a preset object, and the preset object is an object meeting a preset condition in the plurality of objects.
This step aims at obtaining a panoramic image and a close-up image of a preset object. The panoramic image is an image covering all areas of a specific place; it may be acquired by a panoramic camera arranged in that place, or stitched from images captured by several cameras. The specific place is the scene in which things need to be identified, such as a classroom or a storage room, and it may be the whole scene or a specific part of it; for a classroom, for example, it may be the entire classroom or the area of the classroom excluding the podium. The objects are persons, such as students in a classroom, or articles, such as paintings in a storage room. The close-up image is a clear image of the preset object acquired by a close-up camera or the like; it may be obtained by shooting each preset object individually or by capturing all preset objects together. For a classroom, the close-up image may be a separate image of each distant student, a single image of all distant students, and so on.
In some application scenarios, the preset object targeted by the close-up image may be adjusted according to the specific application scenario. For example, in one application scenario the preset objects are the objects, among the plurality of objects, whose distance to the acquisition position of the panoramic image is greater than a preset threshold; in another, the objects whose distance to the focus area of the panoramic image is greater than a preset threshold; in yet another, the objects whose sharpness in the panoramic image is lower than a preset threshold; and so on. The close-up image may also target a specific object designated in advance.
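Purely as an illustration (the disclosure does not prescribe any particular algorithm), a sharpness-based selection of preset objects could be sketched as follows in Python, assuming OpenCV's Haar face detector and the variance of the Laplacian as the sharpness measure; the threshold value and helper names are assumptions, not part of the patent.

```python
# Minimal sketch (assumed implementation, not part of the disclosure):
# treat as "preset objects" those faces whose sharpness in the panoramic
# image falls below a threshold.
import cv2

SHARPNESS_THRESHOLD = 60.0  # illustrative value; would be tuned per deployment

def face_sharpness(gray_face):
    """Variance of the Laplacian, a common focus/sharpness measure."""
    return cv2.Laplacian(gray_face, cv2.CV_64F).var()

def select_preset_objects(panoramic_bgr):
    """Return bounding boxes of faces in the panorama that are too blurry."""
    gray = cv2.cvtColor(panoramic_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    blurry = []
    for (x, y, w, h) in faces:
        if face_sharpness(gray[y:y + h, x:x + w]) < SHARPNESS_THRESHOLD:
            blurry.append((x, y, w, h))
    return blurry
```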
In some application scenarios, the panoramic image and the close-up image may be acquired sequentially or simultaneously. To keep the posture of an individual target in the close-up image as consistent as possible with its posture in the panoramic image, and thereby increase the similarity between the close-up image and the corresponding partial region of the panoramic image so that the similarity comparison in the following steps performs better and succeeds more often, the panoramic image and the close-up image are preferably acquired during the same time period.
Two images acquired in the same time period have approximately the same content, which makes the subsequent image recognition much easier.
Step 102: comparing the close-up image with the panoramic image, and determining a corresponding area in the panoramic image that corresponds to the close-up image, wherein the corresponding area comprises the preset object.
This step aims to compare the panoramic image against the close-up image and determine the part of the panoramic image that corresponds to the close-up image. According to step 101, the close-up image is an image of a single object in the scene to be identified, or of the area containing all objects that satisfy the condition, while the panoramic image reflects the full view of that scene, so the close-up image necessarily has a corresponding part in the panoramic image. The relationship between the two images can therefore be established by image recognition and comparison techniques: for example, face recognition is performed on the close-up image, features are extracted, image recognition is performed on the panoramic image using the extracted features, and the part whose similarity exceeds a certain threshold is taken as the part corresponding to the close-up image.
The corresponding area is an area that reflects specific features of the image, such as a face area in face recognition, a particular portrait area in painting recognition, or the characteristic features of contraband in contraband detection. In some application scenarios the corresponding area is a concentrated area containing specific features, for example a rectangular region of the panoramic image in which faces are densely distributed; in other scenarios the corresponding area is the individual region where each specific feature is located, for example the rectangular region around each face in the panoramic image. Since the close-up image targets the preset objects that satisfy the condition, one image may cover all eligible preset objects, or each eligible preset object may be captured separately. Either way, the close-up image contains the specific features of each preset object, so the close-up image and the panoramic image can be compared and recognized to establish the correspondence between the close-up image and the corresponding area in the panoramic image. The correspondence may be established between the entire close-up image and the corresponding area in the panoramic image, or between a comparison area of the close-up image and the corresponding comparison area of the panoramic image. For example, in a classroom face recognition scene, when the close-up image contains all students satisfying the condition, the correspondence may be established between the entire close-up image and the corresponding area in the panoramic image, or between each student's face area in the close-up image and the corresponding face area in the panoramic image; when the close-up image contains only a single eligible student, the correspondence may be established between that student's face area and the corresponding face area in the panoramic image.
Because the close-up image and the panoramic image can be shot at the same time or within an extremely short interval, the postures and features of the same person or object in the two images are highly similar, so matching the two images against each other is far more accurate than matching each of them against pre-stored images in a recognition library. Taking face recognition of students in a classroom as an example, a pre-stored recognition library may contain images such as the students' ID photographs; when the close-up and panoramic images are shot at the same time or within a short interval, the facial features of a student reflected in the two images are almost identical, so recognizing the panoramic image directly from the close-up image is far more accurate than recognizing the two images separately against the pre-stored library.
In some application scenarios, in order to accurately determine the corresponding area of the close-up image in the panoramic image, only the specifically required areas are recognized, which reduces the amount of computation. In these scenarios, comparing the close-up image with the panoramic image and determining the corresponding area in the panoramic image comprises: determining a comparison area of the close-up image and a comparison area of the panoramic image; comparing the similarity of the comparison area of the close-up image with the comparison area of the panoramic image; and if the similarity comparison meets the set condition, taking the comparison area of the panoramic image as the corresponding area.
The comparison area of the close-up image is an area reflecting the specific features of the object in the close-up image, such as the face area of each student in a classroom scene, or the characteristic area of each article in a storage room scene (for example, the main body of a painting). To reduce the amount of computation, only specific regions are recognized rather than the entire image: in some application scenarios, the feature region of each object in the close-up image (for example, each face region) is first determined, and then only these feature regions in the close-up image and in the panoramic image are recognized, rather than the whole images.
The setting condition may be, for example, that the degree of similarity is not less than 90%.
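As one possible realization of this similarity comparison (the disclosure does not fix a particular measure), the sketch below uses OpenCV normalized cross-correlation (template matching) over several scales; the scale list is an assumption introduced to bridge the resolution gap between the close-up and the panorama, and the 0.9 acceptance threshold mirrors the 90% setting condition mentioned above.

```python
# Sketch only: locate the close-up comparison area inside the panorama by
# normalized cross-correlation, trying several scales because the close-up
# is usually captured at a much higher resolution than the panorama.
import cv2

def find_corresponding_area(panorama_bgr, closeup_region_bgr,
                            scales=(0.10, 0.15, 0.20, 0.30), threshold=0.9):
    pano = cv2.cvtColor(panorama_bgr, cv2.COLOR_BGR2GRAY)
    crop = cv2.cvtColor(closeup_region_bgr, cv2.COLOR_BGR2GRAY)
    best_score, best_box = 0.0, None
    for s in scales:
        tmpl = cv2.resize(crop, None, fx=s, fy=s)
        if tmpl.shape[0] >= pano.shape[0] or tmpl.shape[1] >= pano.shape[1]:
            continue  # template must be smaller than the search image
        scores = cv2.matchTemplate(pano, tmpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        if max_val > best_score:
            h, w = tmpl.shape
            best_score, best_box = max_val, (max_loc[0], max_loc[1], w, h)
    # Only accept the comparison area if it meets the setting condition (>= 90%).
    return best_box if best_score >= threshold else None
```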
Step 103: performing image recognition on the close-up image, and determining the recognition information of the preset object.
This step aims to recognize the close-up image and determine the recognition information of the preset object corresponding to it. The recognition information may be information identifying the preset object or the identity of the corresponding area; it may also be merely statistical count information, for example, if 5 preset objects are recognized in the close-up image, the recognition information may simply be "5 persons".
The image recognition in this step is similar to the image comparison in step 102, except that the comparison database is different: here a preset standard image library is used, in which standard feature images of all objects are stored. Image recognition is performed on the close-up image based on this standard image library to determine the specific identity information of the preset object contained in the close-up image; for example, face recognition is performed on the close-up image by face recognition technology to determine the identity information of the person corresponding to a single face, or to all faces, in the close-up image. Further, in some application scenarios, performing image recognition on the close-up image and determining the recognition information of the preset object comprises: performing feature recognition on the close-up image, determining the identity information of the preset object corresponding to the close-up image, and taking the identity information as the recognition information.
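A minimal sketch of such library-based identification is given below, assuming a generic face-embedding model behind the hypothetical `extract_feature` callable and cosine similarity as the matching score; these names and the acceptance threshold are illustrative assumptions, not elements taken from the patent.

```python
# Sketch only: identify a close-up face against a preset standard library
# by cosine similarity of feature vectors.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(closeup_face_img, standard_library, extract_feature, min_score=0.6):
    """standard_library: dict mapping an identity (e.g. a name) to its stored
    feature vector; extract_feature: any face-embedding model (assumed helper)."""
    query = extract_feature(closeup_face_img)
    best_name, best_score = None, 0.0
    for name, ref in standard_library.items():
        score = cosine_similarity(query, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= min_score else None
```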
Step 104: determining the recognition result of the object in the corresponding area according to the correspondence between the corresponding area and the close-up image, based on the recognition information.
This step aims to map the recognition information of the preset object in the close-up image onto the corresponding object in the panoramic image according to the correspondence, thereby determining the recognition result of the object in the panoramic image. The recognition result is similar to the recognition information and is information that can identify the identity of each object or each corresponding area.
Based on the correspondence determined in step 102, a mapping between each preset object in the close-up image and an object in the panoramic image can be established directly; based on the recognition information of the preset object determined in step 103, the recognition result of the object in the panoramic image can then be obtained directly through this mapping. That is, the recognition result of an object in the panoramic image can be its identity, for example a student's name, student number, or ID card number.
In this way, the identity information of a specific or unclear object in the panoramic image can be determined; objects that are already clearly distinguishable in the panoramic image can be recognized directly by image recognition or similar methods; and finally the recognition result of the whole panoramic image is obtained. The recognition result may be information reflecting the identity or number of objects in the recognized scene, such as the identity of each student in the classroom, the number of students in the classroom, or the identity and number of paintings in the storage room.
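One way the mapping-and-merging of step 104 could look in code is sketched below; the data shapes (`closeup_matches`, `direct_panorama_results`) are assumptions chosen to fit the outputs of the earlier sketches rather than structures defined in the disclosure.

```python
# Sketch only: carry each identity recognized in a close-up over to its
# corresponding region of the panorama, then merge with the objects that were
# already clear enough to be recognized directly in the panoramic image.
def merge_recognition_results(closeup_matches, direct_panorama_results):
    """
    closeup_matches: list of (panorama_box, identity) pairs, i.e. the
        correspondence from step 102 combined with the identity from step 103.
    direct_panorama_results: dict mapping panorama_box -> identity for objects
        recognized directly in the panoramic image.
    Returns a dict panorama_box -> identity covering the whole scene.
    """
    result = dict(direct_panorama_results)
    for box, identity in closeup_matches:
        result[box] = identity  # close-up based results fill in the unclear regions
    return result
```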
Finally, the recognition result can be output for storage, display or reprocessing. According to different application scenes and implementation requirements, the specific output mode of the recognition result can be flexibly selected.
For example, for an application scenario in which the method of the present embodiment is executed on a single device, the recognition result may be directly output in a display manner on a display section (display, projector, etc.) of the current device, so that the operator of the current device can directly see the content of the recognition result from the display section.
For another example, in an application scenario where the method of this embodiment is executed on a system composed of multiple devices, the recognition result may be sent through any data communication means (wired connection, NFC, Bluetooth, WiFi, cellular mobile network, etc.) to another preset device in the system acting as the recipient, so that the preset device that receives the recognition result can perform subsequent processing on it. Optionally, the preset device may be a preset server, generally deployed in the cloud as a data processing and storage center, which can store and distribute the recognition result; the recipients of the distribution are terminal devices, whose holders or operators may be the current user, monitoring personnel related to the recognized scene, or organizations and individuals related to the objects in the recognized scene.
For another example, in an application scenario where the method of this embodiment is executed on a system composed of multiple devices, the recognition result may also be sent directly through any data communication means to a preset terminal device, which may be any of those described in the preceding paragraph.
In a specific application scenario, taking a classroom as the scene to be recognized, a panoramic image is shot through a panoramic lens and a close-up image is shot through a close-up lens. Assume the scene contains 30 students in total, of whom the panoramic lens can capture all 30 and the close-up lens can capture 9.
Face detection is performed on the close-up picture to obtain a set Rect1 of detected face rectangles; if 9 faces are detected, they are recorded as Rect11, Rect12, …, Rect19.
Feature values are extracted from Rect11–Rect19 and registered in the face recognition model together with the numbers No1–No9. Face recognition is then performed on the panoramic picture; because the panoramic picture is taken at the same moment or a very close moment, the similarity between a face in the panoramic picture and the corresponding face in the close-up image is very high. Matching faces with a similarity above 90% are extracted as the panoramic face set Rect2, in which each face corresponds to a number No1–No9 and to a rectangle Rect21, Rect22, …, Rect29. The numbers in Rect2 correspond one-to-one to the numbers in Rect1.
The face recognition model is then reloaded with the preset face-library sample feature values of the 30 students, each feature value corresponding to a person's name. Face recognition is performed on the close-up faces to obtain a face set Rect3 (Rect31, Rect32, …, Rect39) and the corresponding names (name1, name2, …, name9), which in turn correspond to the set Rect1. Since No1 of Rect11 corresponds to No1 of Rect21, the shared numbers link Rect11 to Rect21, and thus link the recognized names to the set Rect2.
Because each element of Rect2 is a rectangle in the panoramic image, a name recognized in the close-up (for example, name9) is thereby mapped onto the panoramic shot.
Faces in the other areas of the panoramic image are then recognized, and the results are integrated into a recognition result for the students in the whole classroom.
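The number-based bookkeeping of this classroom example could be sketched as follows; `detect_faces` and `extract_feature` are hypothetical helpers standing in for any face detector and face-embedding model, the numbers No1–No9 are realized as list indices, and the 90% rule follows the description above.

```python
# Sketch of the Rect1/Rect2/Rect3 bookkeeping from the classroom example.
# detect_faces and extract_feature are assumed helpers, not APIs named in the
# patent; No1..No9 become list indices here.
import numpy as np

def _cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classroom_example(closeup_img, panorama_img, student_library,
                      detect_faces, extract_feature):
    # Rect1: faces detected in the close-up, numbered No1..No9 by index.
    rect1 = detect_faces(closeup_img)
    if len(rect1) == 0:
        return {}
    feats1 = [extract_feature(closeup_img, r) for r in rect1]

    # Rect2: panoramic faces whose similarity to a close-up face exceeds 90%,
    # inheriting that face's number so Rect1 and Rect2 correspond one-to-one.
    rect2 = {}
    for pano_rect in detect_faces(panorama_img):
        feat = extract_feature(panorama_img, pano_rect)
        scores = [_cos(feat, f) for f in feats1]
        no = int(np.argmax(scores))
        if scores[no] > 0.9:
            rect2[no] = pano_rect

    # Rect3: identities of the close-up faces from the 30-student face library.
    names = {no: max(student_library,
                     key=lambda name: _cos(feat, student_library[name]))
             for no, feat in enumerate(feats1)}

    # Same number -> same person: attach each name to its panoramic rectangle.
    return {tuple(rect2[no]): names[no] for no in rect2}
```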
An image recognition method provided by applying one or more embodiments of the present specification includes: acquiring a panoramic image and a close-up image, wherein the panoramic image comprises a plurality of objects, the close-up image comprises a preset object, and the preset object is an object meeting a preset condition in the plurality of objects; comparing the close-up image with the panoramic image, and determining a corresponding area in the panoramic image that corresponds to the close-up image, the corresponding area comprising the preset object; performing image recognition on the close-up image, and determining the recognition information of the preset object; and determining the recognition result of the object in the corresponding area according to the correspondence between the corresponding area and the close-up image, based on the recognition information. In one or more embodiments of the present disclosure, a panoramic image of a scene and a close-up image of a specific object are acquired separately, and a relationship between the close-up image and the panoramic image is established, so that unclear parts of the panoramic image are recognized with the help of the close-up image. This addresses the problems that, in a large scene, distant objects are captured blurrily by a panoramic camera, which hinders image recognition and leads to a high recognition error rate.
It should be noted that the method of one or more embodiments of the present disclosure may be performed by a single device, such as a computer or server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the devices may perform only one or more steps of the method of one or more embodiments of the present disclosure, and the devices may interact with each other to complete the method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the same inventive concept, one or more embodiments of the present specification further provide an image recognition system, comprising a panoramic camera, a close-up camera, and a processor; wherein:
the panoramic camera is configured to acquire a panoramic image, and the panoramic image comprises a plurality of objects;
the close-up camera is configured to acquire a close-up image, wherein the close-up image comprises a preset object, and the preset object is an object meeting a preset condition in the plurality of objects;
the processor is configured to perform an image recognition method as described in any of the above embodiments.
The system of the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
One or more embodiments of the present specification further provide an electronic device based on the same inventive concept. The electronic device comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to implement the image recognition method according to any one of the above embodiments.
Fig. 2 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment. The electronic device may include: a processor 210, a memory 220, an input/output interface 230, a communication interface 240, and a bus 250. The processor 210, the memory 220, the input/output interface 230, and the communication interface 240 are communicatively coupled to one another within the device via the bus 250.
The processor 210 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present specification.
The memory 220 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 220 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present specification are implemented in software or firmware, the relevant program code is stored in the memory 220 and is called and executed by the processor 210.
The input/output interface 230 is used for connecting an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 240 is used for connecting a communication module (not shown in the figure) to implement communication interaction between the present device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 250 includes a pathway to transfer information between various components of the device, such as processor 210, memory 220, input/output interface 230, and communication interface 240.
It should be noted that although the above-mentioned device only shows the processor 210, the memory 220, the input/output interface 230, the communication interface 240 and the bus 250, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The device of the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, one or more embodiments of the present specification also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform an image recognition method as described in any of the embodiments above.
Computer-readable media of the present embodiments include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to suggest that the scope of the disclosure, including the claims, is limited to these examples; within the spirit of the present disclosure, features of the above embodiments or of different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of one or more embodiments of the present description exist, which are not described in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures, for simplicity of illustration and discussion, and so as not to obscure one or more embodiments of the disclosure. Further, devices may be shown in block diagram form in order to avoid obscuring the understanding of one or more embodiments of the present description, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the one or more embodiments of the present description are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that one or more embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
It is intended that the one or more embodiments of the present specification embrace all such alternatives, modifications and variations as fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of one or more embodiments of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. An image recognition method, comprising:
acquiring a panoramic image and a close-up image, wherein the panoramic image comprises a plurality of objects, and the close-up image comprises a preset object; the preset object is an object meeting a preset condition in the plurality of objects;
comparing the close-up image with the panoramic image, and determining a corresponding area corresponding to the close-up image in the panoramic image, wherein the corresponding area comprises the preset object;
performing image recognition on the close-up image, and determining the recognition information of the preset object;
and determining the recognition result of the object in the corresponding area according to the corresponding relation between the corresponding area and the close-up image on the basis of the recognition information.
2. The method of claim 1, wherein said comparing said close-up image with said panoramic image to determine a corresponding region of said panoramic image corresponding to said close-up image comprises:
determining a comparison area of the close-up image and a comparison area of the panoramic image;
carrying out similarity comparison on the comparison area of the close-up image and the comparison area of the panoramic image;
and if the similarity comparison accords with the set conditions, taking the comparison area of the panoramic image as the corresponding area.
3. The method of claim 1, wherein performing image recognition on the close-up image and determining the recognition information of the preset object comprises:
and performing feature recognition on the close-up image, determining the identity information of the preset object corresponding to the close-up image, and taking the identity information as the recognition information.
4. The method according to claim 1, wherein the preset objects are, in particular:
and the distance from the plurality of objects to the acquisition position of the panoramic image is greater than a preset threshold value.
5. The method according to claim 1, wherein the preset objects are, in particular:
and the distance from the plurality of objects to the focus area of the panoramic image is larger than a preset threshold value.
6. The method according to claim 1, wherein the preset objects are, in particular:
among the plurality of objects, an object whose sharpness is smaller than a preset threshold in the panoramic image.
7. A method as claimed in any one of claims 1 to 6, wherein the panoramic image and the close-up image are acquired during the same time period.
8. An image recognition system, comprising: a panoramic camera, a close-up camera, and a processor; wherein:
the panoramic camera is configured to acquire a panoramic image, and the panoramic image comprises a plurality of objects;
the close-up camera is configured to acquire a close-up image, wherein the close-up image comprises a preset object, and the preset object is an object meeting a preset condition in the plurality of objects;
the processor is configured to perform the method of any one of claims 1 to 7.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the program.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202011307535.5A 2020-11-19 2020-11-19 Image identification method, system, electronic equipment and storage medium Withdrawn CN112308018A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011307535.5A CN112308018A (en) 2020-11-19 2020-11-19 Image identification method, system, electronic equipment and storage medium
PCT/CN2020/141047 WO2022105027A1 (en) 2020-11-19 2020-12-29 Image recognition method and system, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011307535.5A CN112308018A (en) 2020-11-19 2020-11-19 Image identification method, system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112308018A (en) 2021-02-02

Family

ID=74336026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011307535.5A Withdrawn CN112308018A (en) 2020-11-19 2020-11-19 Image identification method, system, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112308018A (en)
WO (1) WO2022105027A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114630396A (en) * 2021-12-31 2022-06-14 厦门阳光恩耐照明有限公司 Intelligent lamp Bluetooth configuration method and system based on image recognition
CN115474076A (en) * 2022-08-15 2022-12-13 珠海视熙科技有限公司 Video stream image output method and device and camera equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105554390A (en) * 2015-12-29 2016-05-04 广东欧珀移动通信有限公司 Shooting method and device, selfie stick, shooting system and mobile terminal
CN105812746B (en) * 2016-04-21 2019-05-10 北京格灵深瞳信息技术有限公司 A kind of object detection method and system
CN109215055A (en) * 2017-06-30 2019-01-15 杭州海康威视数字技术股份有限公司 A kind of target's feature-extraction method, apparatus and application system
CN110648299A (en) * 2018-06-26 2020-01-03 株式会社理光 Image processing method, image processing apparatus, and computer-readable storage medium
CN109299696B (en) * 2018-09-29 2021-05-18 成都臻识科技发展有限公司 Face detection method and device based on double cameras
CN111292278B (en) * 2019-07-30 2023-04-07 展讯通信(上海)有限公司 Image fusion method and device, storage medium and terminal
CN111353361A (en) * 2019-08-19 2020-06-30 深圳市鸿合创新信息技术有限责任公司 Face recognition method and device and electronic equipment

Also Published As

Publication number Publication date
WO2022105027A1 (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN105765628B (en) The method and system that depth map generates
US20190378294A1 (en) Stereo camera and height acquisition method thereof and height acquisition system
US20130083990A1 (en) Using Videogrammetry to Fabricate Parts
US10015445B1 (en) Room conferencing system with heat map annotation of documents
CN112308018A (en) Image identification method, system, electronic equipment and storage medium
CN110111241B (en) Method and apparatus for generating dynamic image
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
CN112818933A (en) Target object identification processing method, device, equipment and medium
CN110619807A (en) Method and device for generating global thermodynamic diagram
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
WO2016145831A1 (en) Image acquisition method and device
CN108289176B (en) Photographing question searching method, question searching device and terminal equipment
CN111526341B (en) Monitoring camera
KR20130109777A (en) Apparatus and method for managing attendance based on face recognition
US10535154B2 (en) System, method, and program for image analysis
CN114120382A (en) Testing method and device of face recognition system, electronic equipment and medium
CN113792674B (en) Method and device for determining empty rate and electronic equipment
CN112153320B (en) Method and device for measuring size of article, electronic equipment and storage medium
CN112308017A (en) Image detection method, system, electronic equipment and storage medium
CN108965694B (en) Method for acquiring gyroscope information for camera level correction and portable terminal
US11087121B2 (en) High accuracy and volume facial recognition on mobile platforms
US20160323490A1 (en) Extensible, automatically-selected computational photography scenarios
CN113807150A (en) Data processing method, attitude prediction method, data processing device, attitude prediction device, and storage medium
CN112565586A (en) Automatic focusing method and device
KR20150106621A (en) Terminal and service providing device, control method thereof, computer readable medium having computer program recorded therefor and image searching system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210202