CN116246061A - Identification area determining method and device and electronic equipment - Google Patents

Identification area determining method and device and electronic equipment

Info

Publication number
CN116246061A
CN116246061A CN202310342079.5A
Authority
CN
China
Prior art keywords
image
identification frame
target
shooting object
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310342079.5A
Other languages
Chinese (zh)
Inventor
陆南宁
杜健聪
唐昊铭
马宜天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Leafun Culture Science and Technology Co Ltd
Original Assignee
Guangzhou Leafun Culture Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Leafun Culture Science and Technology Co Ltd filed Critical Guangzhou Leafun Culture Science and Technology Co Ltd
Priority to CN202310342079.5A priority Critical patent/CN116246061A/en
Publication of CN116246061A publication Critical patent/CN116246061A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method and a device for determining an identification area, and electronic equipment. The method is applied to electronic equipment that is in communication connection with a guide display screen and with a somatosensory detection device. The method comprises the following steps: receiving a first image and a second image sent by the somatosensory detection device, the first image comprising a shooting object and the guide display screen; identifying first image coordinates of the shooting object in the first image; identifying the shooting object in the second image to obtain a first identification result, and generating a target identification frame in the first image according to the first image coordinates and the first identification result; determining the position relation between the target identification frame and the guide display screen in the first image; and determining display coordinates corresponding to the target identification frame in the guide display screen according to the position relation, and outputting the target identification frame to the guide display screen according to the display coordinates. By implementing the embodiment of the application, the recognition rate of the user's limb actions during somatosensory recognition can be improved.

Description

Identification area determining method and device and electronic equipment
Technical Field
The application relates to the technical field of man-machine interaction, in particular to a method and a device for determining an identification area and electronic equipment.
Background
With the rapid development of computer human-machine interaction technology, somatosensory recognition has gradually become a key technology of modern human-machine interaction. Somatosensory recognition controls a computer by recognizing the limb actions of a human body: a user can interact with the computer merely by making limb actions such as gestures, without wearing any sensing equipment. However, the user often does not know the somatosensory detection range of the somatosensory detection device; if the user moves out of that range during somatosensory recognition, the user's limb actions cannot be recognized, which reduces the recognition rate of the user's limb actions.
Disclosure of Invention
The embodiment of the application discloses a method and a device for determining an identification area and electronic equipment, which can improve the identification rate of limb actions of a user in a somatosensory identification process.
The embodiment of the application discloses a method for determining an identification area, applied to electronic equipment that is in communication connection with a guide display screen and with a somatosensory detection device. The somatosensory detection device comprises at least a first camera and a second camera; the shooting range of the first camera is larger than that of the second camera, and the shooting range of the second camera is the somatosensory detection range corresponding to the somatosensory detection device. The method comprises the following steps:
Receiving a first image and a second image sent by the somatosensory detection device; the first image is acquired by the first camera, the second image is acquired by the second camera, and the first image comprises a shooting object and the guiding display screen;
identifying first image coordinates of the shooting object in the first image;
identifying the shooting object for the second image to obtain a first identification result, and generating a target identification frame in the first image according to the first image coordinate and the first identification result;
determining a position relationship between the target identification frame and the guide display screen in the first image;
and determining display coordinates corresponding to the target identification frame in the guide display screen according to the position relation, and outputting the target identification frame to the guide display screen according to the display coordinates.
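As an illustration of this final mapping step, the sketch below converts a target identification frame from first-image pixel coordinates into guide-display-screen coordinates. It assumes, purely for illustration, that the screen appears axis-aligned in the first image and that its corner coordinates are known; a deployed system would more likely use a full homography. All names and values are hypothetical, not from the patent.

```python
def frame_to_display_coords(frame_box, screen_box, screen_res):
    """Map a target identification frame from first-image pixel coordinates
    to guide-display-screen coordinates via a simple linear mapping.

    frame_box  -- (x0, y0, x1, y1) of the frame in the first image
    screen_box -- (x0, y0, x1, y1) of the screen's corners in the first image
    screen_res -- (width, height) of the display in its own pixels
    """
    sx0, sy0, sx1, sy1 = screen_box
    w_px, h_px = screen_res
    fx0, fy0, fx1, fy1 = frame_box

    def to_display(x, y):
        # Normalise against the screen's extent in the image, then rescale.
        u = (x - sx0) / (sx1 - sx0) * w_px
        v = (y - sy0) / (sy1 - sy0) * h_px
        return u, v

    return (*to_display(fx0, fy0), *to_display(fx1, fy1))
```

A frame occupying the middle of the screen's image footprint maps to the corresponding region of the display, e.g. `frame_to_display_coords((150, 150, 250, 250), (100, 100, 300, 300), (1920, 1080))` yields `(480.0, 270.0, 1440.0, 810.0)`.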
As an optional implementation manner, the generating the target identification frame in the first image according to the first image coordinate and the first recognition result includes:
if the first identification result indicates that the shooting object is identified in the second image, generating a target identification frame in the first image according to the first image coordinate; the first image coordinates are included in the target identification frame.
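A minimal sketch of this step, assuming the target identification frame is simply a fixed-size box centred on the centroid of the subject's first image coordinates; the box half-sizes and the function name are illustrative assumptions, not taken from the patent:

```python
def target_frame_around(subject_coords, half_w=50, half_h=50):
    """Generate a target identification frame that contains the subject's
    first image coordinates: here, a box centred on their centroid.

    subject_coords -- list of (x, y) pixel coordinates of the subject
    """
    xs = [x for x, _ in subject_coords]
    ys = [y for _, y in subject_coords]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)  # centroid of the subject
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

By construction the returned box contains the centroid; whether it contains every subject pixel depends on the chosen half-sizes, which a real system would derive from the subject's extent.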
As an alternative embodiment, after the receiving the first image and the second image sent by the somatosensory detection device, the method further includes:
outputting an initial identification frame to the guide display screen;
the step of identifying the shooting object for the second image to obtain a first identification result, and generating a target identification frame in the first image according to the first image coordinate and the first identification result, including:
and if the shooting object is determined to be in the initial identification frame according to the first image coordinate of the shooting object, and the first identification result indicates that the shooting object is not identified in the second image, adjusting the initial identification frame based on the first image coordinate to obtain the target identification frame.
As an optional implementation manner, if it is determined that the photographic subject is within the initial identification frame according to the first image coordinate of the photographic subject, and the first recognition result indicates that the photographic subject is not recognized in the second image, the adjusting the initial identification frame based on the first image coordinate to obtain the target identification frame includes:
If the shooting object is determined to be on the target boundary of the initial identification frame according to the first image coordinate of the shooting object, and the first identification result indicates that the shooting object is not identified in the second image, moving the target boundary to the center position of the initial identification frame by a target distance to obtain the target identification frame; the target boundary is any boundary of the initial identification frame.
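The boundary adjustment described above can be sketched as follows. The function name, the exact-equality test for "on the boundary", and the fixed step size are illustrative assumptions; a real implementation would use a tolerance band and a distance derived from the detection range.

```python
def shrink_boundary(frame, subject_xy, step):
    """Move the boundary of the initial identification frame that the
    subject sits on toward the frame's centre by `step` pixels.

    frame      -- (x0, y0, x1, y1) of the initial identification frame
    subject_xy -- (x, y) first image coordinate of the subject
    """
    x0, y0, x1, y1 = frame
    x, y = subject_xy
    # Decide which boundary the subject lies on and pull it inward.
    if x == x0:
        x0 += step   # left edge moves right, toward the centre
    elif x == x1:
        x1 -= step   # right edge moves left
    if y == y0:
        y0 += step   # top edge moves down
    elif y == y1:
        y1 -= step   # bottom edge moves up
    return (x0, y0, x1, y1)
```

A subject standing on the left edge shrinks only that edge; a subject at a corner shrinks both adjacent edges.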
As an optional implementation manner, after the generating the target identifier frame in the first image according to the first image coordinate and the first recognition result, the method further includes:
determining one or more test areas within the target identification frame;
if the first image coordinates of the shooting object in the first image are matched with a target test area, determining that the shooting object is located in the target test area, and detecting the identification duration of the shooting object identified in the acquired second image when the shooting object is located in the target test area; the target test area is any one of the test areas;
determining an optimal recognition area in the target identification frame from the one or more test areas based on recognition time periods of the shooting object in the test areas respectively corresponding to the test areas; and the identification time length corresponding to the optimal identification area is smaller than a time length threshold value.
As an alternative embodiment, the test area includes a central image area of the target identification frame;
the determining, based on the identification duration of the shooting object in each test area, the optimal identification area in the target identification frame from the one or more test areas includes:
if the identification time length corresponding to the shooting object in the central image area of the target identification frame is smaller than the time length threshold value, determining the central image area of the target identification frame as the optimal identification area;
if the identification time length corresponding to the shooting object in the central image area of the target identification frame is greater than the time length threshold value, determining the optimal identification area from the adjacent image areas of the central image area of the target identification frame; the adjacent image areas are test areas whose distance from the central image area of the target identification frame is less than a distance threshold.
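A minimal sketch of this selection strategy, assuming recognition durations have already been measured for each test area; all names, the threshold, and the tie-breaking choice of the fastest qualifying neighbour are illustrative assumptions:

```python
def best_recognition_area(durations, centre, neighbours, threshold):
    """Pick the optimal recognition area inside the target frame.

    durations  -- dict mapping area id -> measured recognition time (s)
    centre     -- id of the central image area of the target frame
    neighbours -- ids of test areas adjacent to the central area
    threshold  -- time length threshold from the method above
    """
    # The central area wins outright if it recognises fast enough.
    if durations[centre] < threshold:
        return centre
    # Otherwise fall back to the fastest qualifying adjacent area.
    qualifying = [a for a in neighbours if durations[a] < threshold]
    if not qualifying:
        return None  # no area recognises fast enough
    return min(qualifying, key=durations.get)
```

With durations `{'centre': 0.8, 'n1': 0.4, 'n2': 0.6}` and a 0.5 s threshold, the central area is rejected and `'n1'` is selected.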
As an optional implementation manner, after the determining, from the one or more test areas, the best recognition area within the target identification frame based on the recognition duration that the photographic subject is in each of the test areas, the method further includes:
If it is determined, according to the first image coordinates of the shooting object in multiple frames of the first image, that the moving track of the shooting object first moves from outside the target identification frame into the optimal identification area and then approaches any boundary of the target identification frame from the optimal identification area, the shooting object is identified in the second images acquired during this movement to obtain a second identification result, and the target identification frame is adjusted according to the second identification result.
The embodiment of the application discloses a determination device of an identification area, which is applied to electronic equipment, wherein the electronic equipment is respectively in communication connection with a guide display screen and a somatosensory detection device, the somatosensory detection device at least comprises a first camera and a second camera, the shooting range of the first camera is larger than that of the second camera, and the shooting range of the second camera is a somatosensory detection range corresponding to the somatosensory detection device; the device comprises:
the receiving module is used for receiving the first image and the second image sent by the somatosensory detection device; the first image is acquired by the first camera, the second image is acquired by the second camera, and the first image comprises a shooting object and the guiding display screen;
The identification module is used for identifying first image coordinates of the shooting object in the first image;
the generation module is used for identifying the shooting object for the second image to obtain a first identification result, and generating a target identification frame in the first image according to the first image coordinate and the first identification result;
the determining module is used for determining the position relation between the target identification frame and the guiding display screen in the first image;
and the output module is used for determining the display coordinates corresponding to the target identification frame in the guide display screen according to the position relation and outputting the target identification frame to the guide display screen according to the display coordinates.
The embodiment of the application discloses electronic equipment, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor realizes any one of the identification area determining methods disclosed by the embodiment of the application.
The embodiment of the application discloses a computer readable storage medium storing a computer program, wherein the computer program causes a computer to execute any one of the identification area determining methods disclosed in the embodiment of the application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
in the embodiment of the application, the electronic equipment is respectively in communication connection with the guide display screen and the somatosensory detection device, and the somatosensory detection device at least comprises a first camera and a second camera, wherein the shooting range of the first camera is larger than that of the second camera, and the shooting range of the second camera is the somatosensory detection range corresponding to the somatosensory detection device; the electronic equipment receives a first image and a second image sent by the somatosensory detection device, wherein the first image is an image acquired by a first camera, the second image is an image acquired by a second camera, and the first image comprises a shooting object and a guiding display screen; the electronic equipment identifies first image coordinates of a shooting object in a first image; the electronic equipment identifies a shooting object in the second image to obtain a first identification result, and generates a target identification frame in the first image according to the first image coordinate and the first identification result; the electronic equipment determines the position relation between the target identification frame and the guide display screen in the first image; and the electronic equipment determines display coordinates corresponding to the target identification frame in the guide display screen according to the position relation, and outputs the target identification frame to the guide display screen according to the display coordinates.
According to the embodiments of the application, the first identification result obtained by identifying the shooting object in the second image, together with the first image coordinates of the shooting object in the first image, indicates whether the shooting object is within the somatosensory detection range of the second camera. A target identification frame is therefore generated in the first image based on the first identification result and the first image coordinates and output to the guide display screen, guiding the shooting object to walk into the somatosensory detection range corresponding to the somatosensory detection device. This avoids the situation in which the user's limb actions cannot be identified because the shooting object has walked out of the somatosensory detection range during somatosensory recognition, and thus improves the recognition rate of the user's limb actions.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1A is an application scenario schematic diagram of a method for determining an identification area according to an embodiment of the present application;
Fig. 1B is an application scenario schematic diagram of another method for determining an identification area according to an embodiment of the present application;
fig. 1C is a schematic application scenario of another method for determining an identification area according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for determining an identification area according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a first image and a second image acquired by a motion sensing device according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of another method for determining an identification area according to an embodiment of the present application;
FIG. 5 is a flow chart of another method for determining an identification area according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a determination device for an identification area according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and figures herein are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The embodiment of the application discloses a method, a device and electronic equipment for determining an identification area, which can improve the identification rate of limb actions of a user in a somatosensory identification process. The following will describe in detail.
Referring to fig. 1A, fig. 1A is a schematic view of an application scenario of a method for determining an identification area according to an embodiment of the present application, where the application scenario may include an electronic device 10, a somatosensory detection device 20 and a guidance display screen 30.
The electronic device 10 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, a wearable device, and the like. The electronic device 10 may be in communication connection with the somatosensory detection apparatus 20 and the guidance display screen 30, for example by wireless communication such as Bluetooth or Wi-Fi (Wireless Fidelity), which is not limited in particular.
The motion sensing device 20 may include sensors such as RGB cameras, depth cameras, and microphone arrays, and may sense information such as body gestures, actions, and sounds of the photographed object through the above sensors, thereby implementing various interaction modes such as wireless, gesture, and voice.
In this embodiment, the somatosensory detection device 20 at least includes a first camera and a second camera.
The first camera may include an RGB camera, and the RGB camera may be used to capture a color image of an ambient environment, where the color image may be used to perform face recognition, pose recognition, and the like on a photographic subject.
The second camera may include a depth camera, and the depth camera analyzes the infrared beam reflected from the obstacle, calculates a distance of an object in the surrounding environment, performs three-dimensional modeling based on the distance of the object in the surrounding environment, obtains a depth image, and extracts a skeleton of the human body from the depth image, thereby recognizing a limb motion of the human body.
Because the RGB camera images by means of reflected visible light, its shooting range is affected only by the illumination intensity. The depth camera images with an infrared beam, so its shooting range is limited both by the intensity of the infrared beam and by the measurable distance of the infrared beam, and it shrinks when the distance is long or the infrared beam is weak. The shooting range of the RGB camera is therefore larger than that of the depth camera.
Since the somatosensory detection device 20 recognizes the limb actions of the photographed object mainly through the depth camera, the shooting range corresponding to the depth camera is the somatosensory detection range corresponding to the somatosensory detection device 20.
The somatosensory detection device 20 may collect a first image through a first camera and a second image through a second camera, and send the first image and the second image to the electronic device 10; wherein the first image includes a photographic subject and a guidance display screen 30.
The guiding display screen 30 may include, but is not limited to, an electronic display screen such as an LED display screen or an LCD display screen, and further, the guiding display screen 30 may be a floor display screen horizontally placed on the floor, a vertical display screen installed on the floor, a wall-mounted vertical display screen installed on a wall surface, or the like.
The electronic device 10 may output the target identification frame to the guide display screen 30 to guide the photographed object into the somatosensory detection range corresponding to the somatosensory detection device.
The electronic device 10 may receive the first image and the second image transmitted by the somatosensory detection device 20; identifying first image coordinates of a shooting object in a first image; identifying a shooting object for the second image to obtain a first identification result, and generating a target identification frame in the first image according to the first image coordinate and the first identification result; determining a positional relationship between the target identification frame and the guidance display screen 30 in the first image; display coordinates corresponding to the target identification frame in the guide display screen 30 are determined according to the positional relationship, and the target identification frame is output to the guide display screen 30 according to the display coordinates.
As shown in fig. 1B, fig. 1B is a schematic view of an application scenario of another method for determining a recognition area according to an embodiment of the present application, where the guiding display screen 30 is a floor display screen, the somatosensory detection device 20 may be disposed on a ceiling, and a shooting object may move on the guiding display screen 30. The photographic subject can move to the in-frame position of the target identification frame 101 according to the target identification frame 101 output by the guiding display screen 30, so that the somatosensory detection device 20 can effectively perform somatosensory recognition on the photographic subject.
As shown in fig. 1C, fig. 1C is a schematic view of an application scenario of another method for determining an identification area according to an embodiment of the present application, where the guiding display screen 30 is a vertical display screen, and the somatosensory detection device 20 may be disposed on a wall surface facing the guiding display screen 30, which is not limited in particular. The photographic subject may move facing the guide display screen 30. The photographed object can move to a position opposite to the target identification frame 101 according to the target identification frame 101 output from the guide display screen 30, so that the somatosensory detection device 20 can effectively perform somatosensory recognition on the photographed object.
As can be seen, in the embodiment of the present application, the electronic device 10 outputs the target identification frame 101 to the guiding display screen 30 to guide the photographed object to walk into the somatosensory detection range corresponding to the somatosensory detection device 20, so that the problem that the limb motion of the user cannot be identified because the photographed object walks out of the somatosensory detection range of the somatosensory detection device 20 in the somatosensory identification process can be avoided, and the identification rate of the limb motion of the user is improved.
Referring to fig. 2, fig. 2 is a flowchart of a method for determining an identification area disclosed in an embodiment of the present application, where the method for determining an identification area may be applied to the foregoing electronic device, and the electronic device is respectively connected with a guide display screen and a somatosensory detection device in a communication manner, where the somatosensory detection device at least includes a first camera and a second camera, and a shooting range of the first camera is greater than a shooting range of the second camera, and a shooting range of the second camera is a somatosensory detection range corresponding to the somatosensory detection device. As shown in fig. 2, the method comprises the steps of:
201. the first image and the second image transmitted by the somatosensory detection device are received.
The electronic equipment receives the first image and the second image sent by the somatosensory detection device; the first image is acquired by a first camera, the second image is acquired by a second camera, and the first image comprises a shooting object and a guiding display screen.
In this embodiment of the application, the shooting range of the first camera can cover both the shooting object and the guide display screen, so the first image acquired by the first camera includes the shooting object and the guide display screen. Because the shooting range of the second camera is smaller than that of the first camera, the second image acquired by the second camera does not necessarily include the shooting object and the complete guide display screen.
Referring to fig. 3, fig. 3 is a schematic diagram of a first image and a second image acquired by a motion sensing device according to an embodiment of the present disclosure. Since the second camera may be a depth camera that measures the distance between an object in the surrounding environment and the camera using infrared or other light sources of a specific spectrum, the second image 302 acquired by the second camera is typically black and white. The second image 302 acquired by the second camera is normally black when the somatosensory detection device 20 does not recognize the photographing object; upon recognition of the subject, in the second image 302 acquired by the second camera, the outline of the subject is typically displayed in white or a lighter color, while the background is displayed in black or a darker color. Therefore, according to the different colors or the brightness in the second image 302, the recognition condition of the photographing object can be effectively determined from the second image 302, and the limb movement of the photographing object can be recognized by using the second image 302.
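The presence check described above, deciding from the brightness of the second image whether the shooting object was recognised, might be sketched as follows; representing the grayscale image as nested lists and both thresholds are illustrative assumptions, not values from the patent:

```python
def subject_recognised(depth_image, bright_threshold=128, min_pixels=50):
    """Decide from a grayscale second image whether the subject was
    recognised: a recognised subject shows up as light pixels against
    a dark background, so count sufficiently bright pixels.

    depth_image -- nested lists of grayscale values in [0, 255]
    """
    bright = sum(1 for row in depth_image for p in row if p > bright_threshold)
    return bright >= min_pixels
```

An all-dark image (no subject in the detection range) yields `False`; an image with a large light silhouette yields `True`.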
Since the first camera may be an RGB camera, the first image 301 acquired by the RGB camera is generally colored, and the first image 301 includes a photographic subject and a guide display screen 30. The electronic device can effectively recognize the first image coordinates of the photographic subject in the first image 301 using the first image 301, and plan and generate the target identification frame 101 in the first image 301.
According to the embodiment of the application, the electronic equipment effectively utilizes the first image and the second image to generate the target identification frame, so that the efficiency and the accuracy of determining the target identification frame are improved, and the shooting object is guided to walk into the somatosensory detection range corresponding to the somatosensory detection device through the target identification frame, so that the identification rate of the limb actions of the user can be improved in the somatosensory identification process.
202. First image coordinates of a photographic subject in a first image are identified.
The electronic device identifies first image coordinates of the photographic subject in the first image.
The first image coordinates may include pixel coordinates corresponding to a plurality of photographing object pixel points. Specifically, the first image may be composed of a plurality of pixel points arranged in rows and columns; the electronic device may establish an image coordinate system in the first image, where the image coordinate system may be a coordinate system using an upper left corner or a lower left corner of the first image as an origin and using pixels as units; the abscissa and ordinate of each pixel in the image may be the number of columns and rows, respectively, in which it is located in the first image.
As an optional implementation manner, the electronic device may identify the shooting object in the first image through a target recognition algorithm, where the target recognition algorithm may be an R-CNN algorithm, an SSD algorithm, a YOLO algorithm, or the like, and is not particularly limited. After the shooting object is identified in the first image, the electronic device determines the pixel coordinates corresponding to the pixel points of the shooting object in the first image, obtaining the first image coordinates.
As another optional implementation manner, the electronic device may receive a sample image acquired by the first camera of the somatosensory detection device before the shooting object enters the shooting range of the first camera, and receive the first image acquired by the first camera after the shooting object enters the shooting range. The sample image and the first image are then compared differentially to determine the pixel coordinates corresponding to the pixel points of the shooting object in the first image, obtaining the first image coordinates. Specifically, the pixel values of the sample image may be subtracted from the pixel values of the first image to obtain a difference image, and binarization processing may then be performed on the difference image to determine the pixel coordinates corresponding to the pixel points of the shooting object in the first image, obtaining the first image coordinates.
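The differential-comparison implementation above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: images are represented as 2-D lists of grayscale values, and the function name and binarization threshold of 30 are assumptions.

```python
def subject_pixel_coords(sample, first_image, threshold=30):
    """Subtract the sample image from the first image pixel by pixel,
    binarize the difference, and return the (row, col) coordinates of
    the pixels that belong to the shooting object."""
    coords = []
    for row in range(len(first_image)):
        for col in range(len(first_image[row])):
            # Binarization: a large difference marks a subject pixel.
            if abs(first_image[row][col] - sample[row][col]) > threshold:
                coords.append((row, col))
    return coords
```

In practice the two frames would come from the same fixed camera, so any pixel that changes substantially between the empty-scene sample and the current frame is attributed to the shooting object.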
203. And identifying the shooting object for the second image to obtain a first identification result, and generating a target identification frame in the first image according to the first image coordinate and the first identification result.
The electronic equipment identifies the shooting object of the second image to obtain a first identification result, and generates a target identification frame in the first image according to the first image coordinate and the first identification result.
The method for identifying the shooting object by the electronic device for the second image may refer to the method for identifying the shooting object in the first image, which is not described in detail.
The first recognition result may indicate either that the shooting object is recognized in the second image, or that the shooting object is not recognized in the second image.
In one embodiment, the electronic device generating the target identification frame in the first image according to the first image coordinates and the first recognition result may include the following step: if the first recognition result indicates that the shooting object is recognized in the second image, generating the target identification frame in the first image according to the first image coordinates, where the target identification frame includes the first image coordinates.
If the first recognition result indicates that the shooting object is recognized in the second image, the actual position of the shooting object lies within the somatosensory detection range corresponding to the somatosensory detection device. In this case, the first image coordinates of the shooting object in the first image are taken as part of the target identification frame; that is, when the shooting object is recognized in the second image, both the border and the interior of the target identification frame are composed of the first image coordinates corresponding to the shooting object in the first image.
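One way to plan such a frame around the first image coordinates is an axis-aligned bounding box, sketched below. The bounding-box formulation and the optional margin are assumptions; the text only requires the frame to contain the first image coordinates.

```python
def plan_target_frame(first_image_coords, margin=0):
    """Return (top, left, bottom, right) of the smallest axis-aligned
    frame containing every subject pixel, optionally padded by margin."""
    rows = [r for r, _ in first_image_coords]
    cols = [c for _, c in first_image_coords]
    return (min(rows) - margin, min(cols) - margin,
            max(rows) + margin, max(cols) + margin)
```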
204. And determining the position relation between the target identification frame and the guiding display screen in the first image.
The electronic equipment determines the position relation between the target identification frame and the guide display screen in the first image; the positional relationship between the target identification frame and the guide display screen refers to a positional relationship between the target identification frame generated on the first image and the virtual guide display screen included in the first image.
Specifically, the electronic device may determine the positional relationship between the target identification frame and the virtual guide display screen based on the position of the target identification frame in the first image and the position of the virtual guide display screen in the first image. The position of the target identification frame may include the pixel coordinates corresponding to its pixel points, and the position of the virtual guide display screen may include the pixel coordinates corresponding to the pixel points of the guide display screen; the positional relationship may therefore be a coordinate transformation matrix between the pixel coordinates included in the target identification frame and the pixel coordinates included in the virtual guide display screen.
205. And determining display coordinates corresponding to the target identification frame in the guide display screen according to the position relation, and outputting the target identification frame to the guide display screen according to the display coordinates.
The electronic device determines, according to the positional relationship, the display coordinates corresponding to the target identification frame in the guide display screen, and outputs the target identification frame to the guide display screen according to the display coordinates. It should be noted that these display coordinates are coordinates in the physical guide display screen; that is, the electronic device may output the target identification frame to the physical guide display screen according to the display coordinates.

The electronic device can determine the display coordinates corresponding to the target identification frame in the physical guide display screen according to the size and resolution of the physical guide display screen and the positional relationship between the target identification frame and the guide display screen. The resolution of the physical guide display screen is the number of pixels it displays, usually expressed as the number of pixels per row multiplied by the number of pixels per column; the pixel coordinates of each pixel of the physical guide display screen can therefore be determined based on its size and the number of rows and columns of its pixels.

Accordingly, the electronic device can convert the pixel coordinates of the target identification frame's pixel points in the first image into the pixel coordinates of the corresponding pixels in the physical guide display screen, which serve as the display coordinates of the target identification frame in the physical guide display screen.

The electronic device outputs the target identification frame to the physical guide display screen according to the display coordinates, that is, controls the pixels of the physical guide display screen corresponding to the target identification frame to emit light, flash, or the like, thereby achieving the identification effect and guiding the shooting object into the somatosensory detection range corresponding to the somatosensory detection device 20.
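A simplified sketch of this coordinate conversion is shown below. It assumes the virtual guide display screen occupies an axis-aligned rectangle `screen_box` in the first image and maps each frame pixel linearly onto the physical screen's pixel grid; a real deployment with camera perspective would likely need a full homography rather than this linear scaling.

```python
def image_to_display(pixel, screen_box, resolution):
    """Map a (row, col) pixel inside the virtual screen's bounding box
    in the first image to a display pixel on the physical guide screen.

    screen_box: (top, left, bottom, right) of the screen in the image.
    resolution: (rows, cols) of the physical screen's pixel grid.
    """
    row, col = pixel
    top, left, bottom, right = screen_box
    rows, cols = resolution
    disp_row = round((row - top) / (bottom - top) * rows)
    disp_col = round((col - left) / (right - left) * cols)
    return disp_row, disp_col
```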
Referring to fig. 4, fig. 4 is a flowchart of another method for determining a recognition area disclosed in the embodiment of the present application, where the determination method of the recognition area can be applied to the foregoing electronic device, and the electronic device is respectively connected with the guide display screen and the somatosensory detection device in a communication manner, and the somatosensory detection device at least includes a first camera and a second camera, where a shooting range of the first camera is greater than a shooting range of the second camera, and a shooting range of the second camera is a somatosensory detection range corresponding to the somatosensory detection device. As shown in fig. 4, the method comprises the steps of:
401. the first image and the second image transmitted by the somatosensory detection device are received.
The first image is acquired by a first camera, the second image is acquired by a second camera, and the first image comprises a shooting object and a guiding display screen.
The implementation of step 401 may refer to the above embodiments, and is not described in detail.
402. And outputting the initial identification frame to the guide display screen.
The electronic device outputs an initial identification frame to the guide display screen; it should be noted that the initial identification frame is output to the physical guide display screen. The position and size of the initial identification frame may be determined based on user-defined parameters. For example, the initial identification frame may be a rectangular frame with a size similar to that of the guide display screen, or a frame of arbitrary shape located at the center of the guide display screen; this is not particularly limited.
403. First image coordinates of a photographic subject in a first image are identified.
Reference may be made to the above embodiments for specific implementation of step 403, which is not described in detail.
404. And if the shooting object is determined to be in the initial identification frame according to the first image coordinates of the shooting object, and the first identification result indicates that the shooting object is not identified in the second image, adjusting the initial identification frame based on the first image coordinates to obtain the target identification frame.
If the electronic device determines that the shooting object is in the initial identification frame according to the first image coordinates of the shooting object, and the first identification result indicates that the shooting object is not identified in the second image, the electronic device adjusts the initial identification frame based on the first image coordinates to obtain the target identification frame.
The efficiency of determining the target identification frame can be improved by outputting an initial identification frame and then adjusting the initial identification frame to obtain the target identification frame.
If the shooting object is within the initial identification frame but is not recognized in the second image, part of the initial identification frame lies outside the somatosensory detection range of the somatosensory detection device. Adjusting the initial identification frame based on the first image coordinates may therefore consist of cutting away the area of the initial identification frame that lies outside the somatosensory detection range, obtaining the target identification frame.
As an optional implementation manner, if the electronic device determines that the shooting object is within the initial identification frame according to the first image coordinate of the shooting object, and the first recognition result indicates that the shooting object is not recognized in the second image, the electronic device adjusts the initial identification frame based on the first image coordinate to obtain the target identification frame, and may include the following steps:
if the shooting object is determined to be on the target boundary of the initial identification frame according to the first image coordinate of the shooting object, and the first identification result indicates that the shooting object is not identified in the second image, moving the target boundary to the center position of the initial identification frame by a target distance to obtain a target identification frame; the target boundary is any boundary of the initial identification box.
It should be noted that the shooting object being located on the target boundary of the initial identification frame may mean standing on the target boundary, or facing the target boundary. For example, if the guide display screen is a floor display screen, the shooting object may walk to the border of the initial identification frame shown on the guide display screen and stand on each border for a period of time, for example 5 seconds. If the shooting object is located on the target boundary but is not recognized in the second image, the target boundary of the initial identification frame lies outside the somatosensory detection range of the somatosensory detection device; the electronic device may therefore move the target boundary on the physical guide display screen toward the center of the initial identification frame by a target distance, that is, move the target boundary into the initial identification frame. The target distance may be, for example, 10 cm to 30 cm, and is not particularly limited.
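The boundary adjustment of step 404 can be sketched as follows, assuming the initial identification frame is a rectangle expressed as (top, left, bottom, right) in screen units; the boundary names are illustrative.

```python
def move_boundary_inward(frame, boundary, distance):
    """Move one boundary of the rectangular frame toward its center
    by `distance`, leaving the other three boundaries unchanged."""
    top, left, bottom, right = frame
    if boundary == "top":
        top += distance
    elif boundary == "bottom":
        bottom -= distance
    elif boundary == "left":
        left += distance
    elif boundary == "right":
        right -= distance
    return (top, left, bottom, right)
```

Repeating this for each boundary on which the shooting object stood without being recognized in the second image yields the target identification frame.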
By executing the steps, the accuracy of determining the target identification frame can be effectively improved, so that in the somatosensory recognition process, a shooting object is guided to enter the somatosensory detection range of the somatosensory detection device through the accurate target identification frame, and the recognition rate of the limb actions of a user is improved.
405. And determining the position relation between the target identification frame and the guiding display screen in the first image.
406. And determining display coordinates corresponding to the target identification frame in the guide display screen according to the position relation, and outputting the target identification frame to the guide display screen according to the display coordinates.
The implementation of step 405 and step 406 may refer to the above embodiments, and is not described in detail.
According to this embodiment of the application, the initial identification frame is output first and then adjusted according to the first recognition result and the first image coordinates to obtain the target identification frame, improving the efficiency and accuracy of determining the target identification frame. The target identification frame is output to the guide display screen to guide the shooting object to walk into the somatosensory detection range corresponding to the somatosensory detection device. This avoids the problem that the limb movements of the user cannot be recognized because the shooting object is guided outside the somatosensory detection range during somatosensory recognition, and improves the recognition rate of the user's limb movements.
Referring to fig. 5, fig. 5 is a flowchart of another method for determining a recognition area disclosed in an embodiment of the present application, where the method for determining a recognition area can be applied to the foregoing electronic device, and the electronic device is respectively in communication connection with a guiding display screen and a somatosensory detection device, and the somatosensory detection device at least includes a first camera and a second camera, where a shooting range of the first camera is greater than a shooting range of the second camera, and a shooting range of the second camera is a somatosensory detection range corresponding to the somatosensory detection device. As shown in fig. 5, the method comprises the steps of:
501. The first image and the second image transmitted by the somatosensory detection device are received.
The first image is acquired by a first camera, the second image is acquired by a second camera, and the first image comprises a shooting object and a guiding display screen.
502. First image coordinates of a photographic subject in a first image are identified.
503. And identifying the shooting object for the second image to obtain a first identification result, and generating a target identification frame in the first image according to the first image coordinate and the first identification result.
The implementation of steps 501 to 503 may refer to the above embodiments, and are not described in detail.
504. One or more test areas are determined within the target identification frame.
The electronic device determines one or more test areas within the target identification frame generated in the first image. Each test area may include a plurality of pixel points, which may be used as test pixel points.
The test area may include a circular area centered on the target identification frame, and the radius of the circular area is not limited, for example, the radius may be 20 cm to 50 cm, but is not limited thereto.
505. If the first image coordinates of the shooting object in the first image are matched with the target test area, determining that the shooting object is located in the target test area, and detecting the identification duration of the shooting object identified in the acquired second image when the shooting object is located in the target test area.
If the first image coordinates of the shooting object in the first image are matched with the target test area, the electronic equipment determines that the shooting object is located in the target test area, and detects the identification duration of the shooting object identified in the acquired second image when the shooting object is located in the target test area; the target test area is any test area.
It should be noted that whether the first image coordinates of the shooting object match the target test area may be determined by computing the degree of overlap between the shooting-object pixel points included in the first image coordinates and the test pixel points included in the target test area. If the degree of overlap is greater than an overlap threshold, the first image coordinates of the shooting object in the first image are judged to match the target test area; for example, the overlap threshold may be 80% to 100%, and is not particularly limited.
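The overlap test above can be sketched as follows; pixel points are represented as (row, col) tuples, and the default threshold of 0.8 is the lower end of the example range.

```python
def overlap_degree(subject_pixels, test_pixels):
    """Fraction of the shooting object's pixels that fall inside
    the test area."""
    subject, test = set(subject_pixels), set(test_pixels)
    return len(subject & test) / len(subject) if subject else 0.0

def matches_test_area(subject_pixels, test_pixels, threshold=0.8):
    """True if the subject's coordinates match the target test area."""
    return overlap_degree(subject_pixels, test_pixels) > threshold
```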
506. And determining the optimal identification area in the target identification frame from one or more test areas based on the identification time periods of the shooting object in each test area.
The electronic equipment determines the optimal identification area in the target identification frame from one or more test areas based on the identification time length of the shooting object in each test area; the identification time length corresponding to the optimal identification area is smaller than the time length threshold value.
The duration threshold may be 0.5 seconds to 1 second, and is not particularly limited. It should be noted that, when the shooting object enters the target test area, if the electronic device can immediately recognize the shooting object in the second image, that is, the recognition duration is less than the duration threshold, then the viewing angle of the person changes little in the target test area and the person's feature points are easily captured, indicating that the target test area is the optimal recognition area.
The optimal recognition area may be an optimal somatosensory detection range corresponding to the somatosensory detection device, and when the photographing object enters the optimal recognition area, the electronic device may be capable of rapidly and accurately recognizing the limb motion of the photographing object in the second image.
Therefore, the embodiment of the application firstly determines the target identification frame for guiding the shooting object to walk into the motion sensing detection range corresponding to the motion sensing detection device, and then determines the optimal identification area in the target identification frame so as to acquire the optimal motion sensing detection range corresponding to the motion sensing detection device, thereby further improving the accuracy of identifying the limb motion of the shooting object.
As an alternative embodiment, the test area includes a central image area of the target identification frame; the electronic device determining, from one or more test areas, an optimal recognition area in the target identification frame based on recognition durations of the shooting object in the respective test areas, may include the following steps:
if the identification time length corresponding to the central image area of the target identification frame of the shooting object is smaller than the time length threshold value, determining the central image area of the target identification frame as an optimal identification area; if the identification time length of the shooting object corresponding to the central image area of the target identification frame is greater than a time length threshold value, determining an optimal identification area from the adjacent image areas of the central image area of the target identification frame; the adjacent image area is a test area having a distance from the center image area of the target identification frame less than a distance threshold.
It should be noted that the central image area may include a center point of the target identification frame and a plurality of pixel points near the center point, the center point of the target identification frame and the plurality of pixel points near the center point may be used as a plurality of test pixel points included in the test area, and the size of the central image area is not limited.
The adjacent image area may include a plurality of test areas having different distances from the center image area, and the distance threshold may be 10 cm to 20 cm, which is not particularly limited.
The central image area of the target identification frame is tested first because it is the most likely to be the optimal recognition area; it is then fine-tuned by determining the optimal recognition area from its adjacent image areas. This improves the efficiency and accuracy of determining the optimal recognition area, helps guide the user into the optimal recognition area during somatosensory recognition, and improves the accuracy and efficiency of recognizing the user's limb movements.
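The center-first selection strategy can be sketched as follows. `duration_of` is an assumed callback returning the measured recognition duration for an area, and the default 1-second threshold is the upper end of the range given above; both are illustrative.

```python
def best_recognition_area(center_area, adjacent_areas, duration_of,
                          threshold=1.0):
    """Return the first area whose recognition duration is below the
    threshold, preferring the central image area; None if no area
    qualifies."""
    if duration_of(center_area) < threshold:
        return center_area
    for area in adjacent_areas:  # assumed ordered by distance to center
        if duration_of(area) < threshold:
            return area
    return None
```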
As another optional implementation manner, after determining the best recognition area in the target identification frame from one or more test areas based on the recognition duration of the shooting object in each test area, the electronic device further performs the following steps:
if, according to the first image coordinates of the shooting object in multiple frames of the first image, the movement track of the shooting object is determined to move from outside the target identification frame into the optimal recognition area and then approach any boundary of the target identification frame from the optimal recognition area, recognizing the shooting object in the second images acquired during the movement to obtain a second recognition result, and adjusting the target identification frame according to the second recognition result.
After the optimal recognition area in the target identification frame is determined, the shooting object enters the optimal recognition area from the outside of the target identification frame, then approaches to four sides of the target identification frame from the optimal recognition area, and a second recognition result for recognizing the shooting object in a second image acquired in the moving process of the shooting object is observed and used for testing the accuracy of the target identification frame and further debugging the target identification frame.
The second recognition result may include recognition of the photographing object in the second image during movement of the photographing object, and non-recognition of the photographing object in the second image during movement of the photographing object.
Illustratively, the accuracy of the target identification frame is tested N times through the above steps, where N is an integer greater than 1. If the second recognition result indicates that the shooting object is recognized in the second image in more than 90% of the tests, the accuracy of the target identification frame meets the standard; if the second recognition result indicates that the shooting object is not recognized in the second image in more than 10% of the tests, the target identification frame needs to be further adjusted. The specific adjustment method may refer to the above embodiments, and is not described in detail.
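The N-trial accuracy check above can be sketched as a simple tally; each trial records whether the shooting object was recognized in the second image during one movement from the optimal recognition area toward a boundary, and the 90% pass ratio comes from the example given.

```python
def frame_accuracy_meets_standard(trial_results, pass_ratio=0.9):
    """trial_results: list of booleans, one per movement trial,
    True if the shooting object was recognized in the second image.
    Returns True when the recognition rate reaches the pass ratio."""
    return sum(trial_results) / len(trial_results) >= pass_ratio
```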
After the optimal recognition area in the target identification frame is determined, the accuracy of the target identification frame can be further tested, so that the target identification frame is debugged, the accuracy of the target identification frame is further improved, and therefore, the accuracy and the efficiency of recognizing the limb actions of a user can be improved in the somatosensory recognition process.
507. And determining the position relation between the target identification frame and the guiding display screen in the first image.
508. And determining display coordinates corresponding to the target identification frame in the guide display screen according to the position relation, and outputting the target identification frame to the guide display screen according to the display coordinates.
For the specific implementation of steps 507 to 508, reference may be made to the above embodiments, and details are not repeated.
According to the embodiment of the application, the target identification frame used for guiding the shooting object to walk into the somatosensory detection range corresponding to the somatosensory detection device is firstly determined, and then the optimal identification area in the target identification frame is determined, so that the optimal somatosensory detection range corresponding to the somatosensory detection device is obtained, and the accuracy rate of identifying the limb actions of the shooting object is further improved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a device for determining an identification area according to an embodiment of the present application. The device can be applied to the electronic equipment, the electronic equipment is respectively in communication connection with the guiding display screen and the somatosensory detection device, the somatosensory detection device at least comprises a first camera and a second camera, the shooting range of the first camera is larger than that of the second camera, and the shooting range of the second camera is the somatosensory detection range corresponding to the somatosensory detection device. As shown in fig. 6, the identification area determining apparatus 600 may include: a receiving module 610, an identifying module 620, a generating module 630, a determining module 640 and an output module 650.
A receiving module 610, configured to receive the first image and the second image sent by the somatosensory detection device; the first image is acquired by a first camera, the second image is acquired by a second camera, and the first image comprises a shooting object and a guiding display screen;
an identification module 620, configured to identify first image coordinates of a shooting object in a first image;
the generating module 630 is configured to identify a shooting object for the second image, obtain a first identification result, and generate a target identification frame in the first image according to the first image coordinate and the first identification result;
a determining module 640, configured to determine a positional relationship between the target identification frame and the guidance display screen in the first image;
and the output module 650 is used for determining display coordinates corresponding to the target identification frame in the guiding display screen according to the position relation and outputting the target identification frame to the guiding display screen according to the display coordinates.
In one embodiment, the generating module 630 is further configured to generate the target identifier frame in the first image according to the first image coordinate if the first identification result indicates that the shooting object is identified in the second image; the target identification frame includes first image coordinates therein.
In one embodiment, the output module 650 is further configured to output an initial identification frame to the guidance display screen after the receiving module 610 receives the first image and the second image sent by the somatosensory detection device;
The generating module 630 is further configured to, if it is determined that the photographic subject is within the initial identification frame according to the first image coordinate of the photographic subject, and the first recognition result indicates that the photographic subject is not recognized in the second image, adjust the initial identification frame based on the first image coordinate, and obtain the target identification frame.
In one embodiment, the generating module 630 is further configured to, if it is determined that the photographic subject is on the target boundary of the initial identification frame according to the first image coordinate of the photographic subject, and the first recognition result indicates that the photographic subject is not recognized in the second image, move the target boundary to the center position of the initial identification frame by a target distance, so as to obtain the target identification frame; the target boundary is any boundary of the initial identification box.
In one embodiment, the determining module 640 is further configured to determine one or more test areas within the target identification frame; if the first image coordinates of the shooting object in the first image are matched with the target test area, determining that the shooting object is located in the target test area, and detecting the identification duration of the shooting object identified in the acquired second image when the shooting object is located in the target test area; the target test area is any test area; determining the optimal identification area in the target identification frame from one or more test areas based on the identification time length of the shooting object in each test area; the identification time length corresponding to the optimal identification area is smaller than the time length threshold value.
In one embodiment, the test areas include a central image area of the target identification frame; the determining module 640 is further configured to determine the central image area of the target identification frame as the optimal recognition area if the recognition duration corresponding to the central image area is less than the duration threshold; and if the recognition duration corresponding to the central image area is greater than the duration threshold, determine the optimal recognition area from among image areas adjacent to the central image area; an adjacent image area is a test area whose distance from the central image area of the target identification frame is less than a distance threshold.
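As an illustrative sketch of the selection logic above (not part of the disclosure; the data layout and names are assumptions), the center region is preferred when it recognizes quickly enough, otherwise the fastest qualifying neighbor is chosen:

```python
def pick_best_region(durations, center, neighbors, threshold):
    """durations: dict mapping region name -> measured recognition
    duration in seconds. Prefer the center region if its duration is
    under the threshold; otherwise return the fastest neighboring
    region under the threshold, or None if none qualifies."""
    if durations[center] < threshold:
        return center
    candidates = [r for r in neighbors
                  if durations.get(r, float("inf")) < threshold]
    return min(candidates, key=lambda r: durations[r]) if candidates else None
```

With `durations = {"center": 2.0, "left": 0.8, "right": 1.5}` and a 1.0 s threshold, the center is too slow and only `"left"` qualifies, so it becomes the optimal recognition area.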
In one embodiment, the identification area determining device 600 may further include an adjustment module;
the adjustment module is configured to, if it is determined from the first image coordinates of the shooting object in multiple frames of the first image that the movement track of the shooting object first moves from outside the target identification frame into the optimal recognition area and then approaches any boundary of the target identification frame from the optimal recognition area, recognize the shooting object in the second images acquired during the movement to obtain a second recognition result, and adjust the target identification frame according to the second recognition result.
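The trajectory condition described above (outside the frame, then into the optimal recognition area, then toward a boundary) can be sketched as follows; this is illustrative only, and the rectangle layout, margin, and function names are assumptions:

```python
def contains(rect, pt):
    """True if point (x, y) lies inside the axis-aligned rect."""
    x, y = pt
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def near_boundary(rect, pt, margin):
    """True if pt is inside rect but within margin pixels of any edge."""
    left, top, right, bottom = rect
    x, y = pt
    return contains(rect, pt) and (
        x - left < margin or right - x < margin or
        y - top < margin or bottom - y < margin)

def track_triggers_adjustment(track, frame, best_region, margin):
    """True if the track starts outside the frame, later enters the
    optimal recognition area, and afterwards approaches a boundary."""
    was_outside = False
    reached_best = None
    for pt in track:
        if not contains(frame, pt):
            was_outside = True
        elif was_outside and reached_best is None and contains(best_region, pt):
            reached_best = pt
        elif reached_best is not None and near_boundary(frame, pt, margin):
            return True
    return False
```

A track such as `[(-10, 50), (50, 50), (95, 50)]` against a 100x100 frame with a central best region would trigger the adjustment, since the final point sits within the boundary margin.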
According to the method and the device of the embodiments of the present application, whether the shooting object is within the somatosensory detection range of the second camera is determined from the first recognition result of recognizing the shooting object in the second image together with the first image coordinates of the shooting object in the first image. A target identification frame is then generated in the first image based on the first recognition result and the first image coordinates and output to the guide display screen, so as to guide the shooting object to walk into the somatosensory detection range corresponding to the somatosensory detection device. This avoids the problem that the user's body motions cannot be recognized because the shooting object walks out of the somatosensory detection range during somatosensory recognition, thereby improving the recognition rate of the user's body motions.
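To illustrate the final output step, mapping the target identification frame from first-image coordinates into the guide display screen's own coordinates can be sketched as below. This is an assumption-laden simplification: it treats the screen as an axis-aligned rectangle in the first image, whereas a real implementation would likely need a perspective transform (homography); all names are hypothetical:

```python
def map_frame_to_screen(frame, screen_rect_in_image, screen_resolution):
    """Map a rectangle (left, top, right, bottom) from first-image pixel
    coordinates into the guide display screen's pixel coordinates,
    assuming the screen appears axis-aligned in the first image."""
    sx_left, sy_top, sx_right, sy_bottom = screen_rect_in_image
    screen_w, screen_h = screen_resolution
    scale_x = screen_w / (sx_right - sx_left)
    scale_y = screen_h / (sy_bottom - sy_top)
    left, top, right, bottom = frame
    return (
        (left - sx_left) * scale_x,
        (top - sy_top) * scale_y,
        (right - sx_left) * scale_x,
        (bottom - sy_top) * scale_y,
    )
```

For example, if the screen occupies the image rectangle (100, 100, 300, 200) and has a 1920x1080 resolution, a frame at (150, 125, 250, 175) maps to (480, 270, 1440, 810) in screen coordinates.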
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 7, the electronic device 700 may include:
a memory 710 storing executable program code;
a processor 720 coupled to the memory 710;
the processor 720 invokes executable program code stored in the memory 710 to perform any of the identification area determination methods disclosed in the embodiments of the present application.
The embodiments of the present application further disclose a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, causes the processor to implement any one of the identification area determining methods disclosed in the embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments and that the acts and modules referred to are not necessarily required in the present application.
In the various embodiments of the present application, it should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and in particular may be a processor in the computer device) to perform some or all of the steps of the methods in the various embodiments of the present application.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The method, apparatus, and electronic device for determining an identification area disclosed in the embodiments of the present application have been described in detail above. Specific examples are applied herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make modifications to the specific embodiments and the application scope according to the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for determining an identification area, characterized in that the method is applied to an electronic device, wherein the electronic device is communicatively connected to a guide display screen and a somatosensory detection device respectively, the somatosensory detection device comprises at least a first camera and a second camera, the shooting range of the first camera is larger than that of the second camera, and the shooting range of the second camera is the somatosensory detection range corresponding to the somatosensory detection device; the method comprises the following steps:
receiving a first image and a second image sent by the somatosensory detection device; the first image is acquired by the first camera, the second image is acquired by the second camera, and the first image comprises a shooting object and the guide display screen;
identifying first image coordinates of the shooting object in the first image;
recognizing the shooting object in the second image to obtain a first recognition result, and generating a target identification frame in the first image according to the first image coordinates and the first recognition result;
determining a position relationship between the target identification frame and the guide display screen in the first image;
and determining display coordinates corresponding to the target identification frame in the guide display screen according to the position relation, and outputting the target identification frame to the guide display screen according to the display coordinates.
2. The method of claim 1, wherein generating a target identification frame in the first image based on the first image coordinates and the first recognition result comprises:
if the first recognition result indicates that the shooting object is recognized in the second image, generating the target identification frame in the first image according to the first image coordinates; the first image coordinates are included within the target identification frame.
3. The method of claim 1, wherein after the receiving the first image and the second image transmitted by the somatosensory detection device, the method further comprises:
outputting an initial identification frame to the guide display screen;
the recognizing the shooting object in the second image to obtain a first recognition result, and generating a target identification frame in the first image according to the first image coordinates and the first recognition result comprises:
if it is determined according to the first image coordinates of the shooting object that the shooting object is within the initial identification frame, and the first recognition result indicates that the shooting object is not recognized in the second image, adjusting the initial identification frame based on the first image coordinates to obtain the target identification frame.
4. The method according to claim 3, wherein the adjusting the initial identification frame based on the first image coordinates to obtain the target identification frame, if it is determined according to the first image coordinates of the shooting object that the shooting object is within the initial identification frame and the first recognition result indicates that the shooting object is not recognized in the second image, comprises:
if it is determined according to the first image coordinates of the shooting object that the shooting object is on a target boundary of the initial identification frame, and the first recognition result indicates that the shooting object is not recognized in the second image, moving the target boundary toward the center of the initial identification frame by a target distance to obtain the target identification frame; the target boundary being any boundary of the initial identification frame.
5. The method of claim 1, wherein after the generating a target identification frame in the first image based on the first image coordinates and the first recognition result, the method further comprises:
determining one or more test areas within the target identification frame;
if the first image coordinates of the shooting object in the first image match a target test area, determining that the shooting object is located in the target test area, and detecting the recognition duration for which the shooting object is recognized in the acquired second image while the shooting object is located in the target test area; the target test area being any one of the test areas;
determining an optimal recognition area within the target identification frame from the one or more test areas based on the recognition durations respectively corresponding to the shooting object being in each of the test areas; the recognition duration corresponding to the optimal recognition area being less than a duration threshold.
6. The method of claim 5, wherein the test areas comprise a central image area of the target identification frame;
the determining, based on the recognition durations respectively corresponding to the shooting object being in each of the test areas, the optimal recognition area within the target identification frame from the one or more test areas comprises:
if the recognition duration corresponding to the shooting object in the central image area of the target identification frame is less than the duration threshold, determining the central image area of the target identification frame as the optimal recognition area;
if the recognition duration corresponding to the shooting object in the central image area of the target identification frame is greater than the duration threshold, determining the optimal recognition area from among image areas adjacent to the central image area of the target identification frame; the adjacent image area being a test area whose distance from the central image area of the target identification frame is less than a distance threshold.
7. The method of claim 5, wherein after the determining the optimal recognition area within the target identification frame from the one or more test areas based on the recognition durations respectively corresponding to the shooting object being in each of the test areas, the method further comprises:
if it is determined according to the first image coordinates of the shooting object in multiple frames of the first image that the movement track of the shooting object first moves from outside the target identification frame into the optimal recognition area and then approaches any boundary of the target identification frame from the optimal recognition area, recognizing the shooting object in second images acquired during the movement of the shooting object to obtain a second recognition result, and adjusting the target identification frame according to the second recognition result.
8. An identification area determining device, characterized in that the device is applied to an electronic device, wherein the electronic device is communicatively connected to a guide display screen and a somatosensory detection device respectively, the somatosensory detection device comprises at least a first camera and a second camera, the shooting range of the first camera is larger than that of the second camera, and the shooting range of the second camera is the somatosensory detection range corresponding to the somatosensory detection device; the device comprises:
the receiving module is used for receiving a first image and a second image sent by the somatosensory detection device; the first image is acquired by the first camera, the second image is acquired by the second camera, and the first image comprises a shooting object and the guide display screen;
the identification module is used for identifying first image coordinates of the shooting object in the first image;
the generation module is used for recognizing the shooting object in the second image to obtain a first recognition result, and generating a target identification frame in the first image according to the first image coordinates and the first recognition result;
the determining module is used for determining the position relation between the target identification frame and the guiding display screen in the first image;
and the output module is used for determining display coordinates corresponding to the target identification frame in the guide display screen according to the position relationship, and outputting the target identification frame to the guide display screen according to the display coordinates.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to implement the method of any of claims 1 to 7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method according to any of claims 1 to 7.
CN202310342079.5A 2023-03-31 2023-03-31 Identification area determining method and device and electronic equipment Pending CN116246061A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310342079.5A CN116246061A (en) 2023-03-31 2023-03-31 Identification area determining method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116246061A 2023-06-09

Family

ID=86624416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310342079.5A Pending CN116246061A (en) 2023-03-31 2023-03-31 Identification area determining method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116246061A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination