WO2020087383A1 - Control method and apparatus based on image recognition, and control device - Google Patents

Control method and apparatus based on image recognition, and control device

Info

Publication number
WO2020087383A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
shooting
image area
human body
preset
Prior art date
Application number
PCT/CN2018/113160
Other languages
English (en)
French (fr)
Inventor
庞磊
张伟兴
匡正
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/113160 priority Critical patent/WO2020087383A1/zh
Priority to CN201880037903.3A priority patent/CN110770739A/zh
Publication of WO2020087383A1 publication Critical patent/WO2020087383A1/zh
Priority to US17/242,277 priority patent/US20210248362A1/en

Classifications

    • F41A17/08: Safety arrangements for inhibiting firing in a specified direction, e.g. at a friendly person or at a protected area
    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N20/00: Machine learning
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06N5/04: Inference or reasoning models
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T7/50: Image analysis; depth or shape recovery
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/273: Segmentation of patterns in the image field; removing elements interfering with the pattern to be recognised
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06T2207/10048: Infrared image
    • G06T2207/30196: Human being; person
    • G06V2201/07: Target detection

Definitions

  • The present invention relates to the technical field of image processing, and in particular to a control method, device and control device based on image recognition.
  • For shooting devices such as shooting toy products, some users make incorrect estimates of the power of the device, which often results in the shooting device hitting some part of a human body and causing injury. Therefore, in the process of using a shooting device, how to prevent it from hitting characteristic parts of the human body has become an urgent problem to be solved.
  • Embodiments of the present invention provide a control method, device, control device, and storage medium based on image recognition, which can use images to perform safety control on a shooting device.
  • an embodiment of the present invention provides a control method based on image recognition.
  • the method is applied to a shooting device.
  • the shooting device includes a shooting part and a camera device.
  • the method includes: calling the camera device to collect an environment image of the environment where the shooting device is currently located; calling a preset human body characteristic part detection model to perform image area recognition on the environment image; and, if the recognition result indicates that there is a target image area including a preset human body characteristic part in the environment image, subjecting the shooting part to a shooting prohibition process.
  • an embodiment of the present invention provides a control device based on image recognition.
  • the control device is configured in a shooting device.
  • the shooting device includes a shooting part and a camera device.
  • the control device includes:
  • a collection module used to call the camera device to collect the environment image of the environment where the shooting device is currently located
  • a processing module configured to call a preset human body characteristic part detection model to perform image area recognition on the environment image collected by the collection module;
  • the processing module is further configured to perform shooting prohibition processing on the shooting section if the recognition result indicates that there is a target image area including a preset human body characteristic part in the environmental image.
  • an embodiment of the present invention provides a control device. The control device is configured in a shooting device, the shooting device includes a shooting part and a camera device, and the control device includes a controller and a communication interface connected to each other, wherein the communication interface is controlled by the controller and used to send and receive instructions, and the controller is configured to:
  • call the camera device to collect an environment image of the environment where the shooting device is currently located; call a preset human body characteristic part detection model to perform image area recognition on the environment image; and, if the recognition result indicates that there is a target image area including a preset human body characteristic part in the environment image, subject the shooting part to a shooting prohibition process.
  • an embodiment of the present invention provides a computer storage medium that stores computer program instructions, which when executed are used to implement the above-described image recognition-based control method.
  • the camera device can be called to collect an environment image of the environment in which the shooting device is currently located, and the preset human body characteristic part detection model can be called to perform image region recognition on the environment image; if the recognition result indicates that there is a target image area including a preset human body characteristic part in the environment image, the shooting part is prohibited from shooting.
  • the shooting device can be safely controlled in combination with images, which is beneficial to improving the safety of the shooting device.
  • FIG. 1a is a schematic structural diagram of a shooting device provided by an embodiment of the present invention.
  • FIG. 1b is a schematic diagram of a scenario provided by an embodiment of the present invention.
  • FIG. 1c is a schematic diagram of an environment image provided by an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a control method based on image recognition provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of the image position information of a head and shoulders in a training image provided by an embodiment of the present invention.
  • FIG. 4a is a schematic diagram of a training image provided by an embodiment of the present invention.
  • FIG. 4b is a schematic diagram of another training image provided by an embodiment of the present invention.
  • FIG. 4c is a schematic diagram of yet another training image provided by an embodiment of the present invention.
  • FIG. 4d is a schematic diagram of yet another training image provided by an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of another control method based on image recognition provided by an embodiment of the present invention.
  • FIG. 6a is a schematic diagram of another environment image provided by an embodiment of the present invention.
  • FIG. 6b is a schematic diagram of a first sub-image provided by an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of yet another control method based on image recognition provided by an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of yet another control method based on image recognition provided by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a control device based on image recognition provided by an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a control device according to an embodiment of the present invention.
  • An embodiment of the present invention proposes a control method based on image recognition.
  • the method can be applied to a shooting device.
  • the shooting device may be a toy product or a competitive robot product.
  • the shooting device includes a shooting part and a camera device.
  • the shooting part launches soft plastic objects in order to hit a target or other competitive robots, while the camera device can be used to collect environment images of the environment where the shooting device is located, providing image acquisition so that the shooting part can be automatically controlled to shoot safely.
  • the shooting device may be a stand-alone hand-held device, or may be installed in a device that requires a shooting function, such as an unmanned aerial vehicle or other aircraft, a gimbal, or a remote-control car.
  • FIG. 1a shows a shooting device.
  • the shooting device includes a camera device 100 and a shooting part 101; both are fixed to the main structure of the shooting device, with the camera device 100 located directly above the shooting part 101. The shooting part 101 can be used for launching, for example firing BB pellets or other plastic objects, water bombs, and the like; the camera device 100 can be used to collect an environment image of the area directly in front of the shooting part 101 in the environment where the shooting device is currently located.
  • in the scenario of FIG. 1b, the shooting device may call the camera device to collect an environment image of the current environment, and the collected environment image p1 may be as shown in FIG. 1c.
  • FIG. 1c includes: the image area 103 (i.e., the shooting image area) corresponding to the estimated shooting area of the shooting part on the environment image p1, the center point O1 of the collected environment image p1, the center point O2 of the shooting image area, and a target image area 104 that includes a preset human body characteristic part.
  • the preset human body characteristic part is a characteristic part of the human body, for example the whole human body, a human face, or a head-and-shoulder part.
  • after the shooting device collects the environment image p1, the preset human body characteristic part detection model can be called to perform image area recognition on p1; if the recognition result indicates that there is a target image area 104 as shown in FIG. 1c in p1, the shooting part is prohibited from shooting, for example prohibited from firing bullets.
  • in this way, the shooting device can be safely controlled in combination with the image, so as to prevent the shooting part from hitting characteristic parts of the human body, which is beneficial to improving the safety of the shooting device.
  • the shooting device 102 in FIG. 1b is only an example, and the specific structure of the shooting device in the present invention can be seen in FIG. 1a.
  • the shooting device in FIG. 1b is also only an example.
  • the shooting device shown can also be mounted on competitive robots, drones and other equipment, or the shooting device may itself be a competitive robot, drone or other equipment provided with a camera device and a shooting part.
  • FIG. 1b and FIG. 1c are only examples of the scenes involved in the embodiments of the present invention; they are mainly used to illustrate the principle by which the embodiments of the present invention perform image recognition based on the camera device and the shooting part so that the shooting device is controlled safely.
  • FIG. 2 is a schematic flowchart of a control method based on image recognition provided by an embodiment of the present invention.
  • the method according to the embodiment of the present invention may be executed by a shooting device.
  • the shooting device includes a shooting part and a camera device.
  • the shooting device calls the camera device in S201 to collect the environment image of the environment where the shooting device is currently located.
  • the shooting device may collect the environment image of the environment where the shooting device is currently located through the camera device at preset time intervals.
  • the preset time interval may be set according to the shooting interval time of the shooting part.
  • the shooting interval time is a time period between the end time of the last shooting of the shooting unit and the start time of the current shooting.
  • the preset time interval may be set to be less than the shooting interval time, so that the shooting section is prohibited from performing shooting processing before the next shooting starts.
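  • As an illustration of this timing constraint, the following Python sketch polls the camera at a preset interval shorter than the shooting interval, so that a prohibition decision is ready before the next shot can start; the camera, detector and shooting_part objects are hypothetical stand-ins for the camera device, the detection model and the shooting part, not names from the patent.

        import time

        def safety_loop(camera, detector, shooting_part, shooting_interval_s):
            # Choose a preset time interval smaller than the shooting interval,
            # so the decision is available before the next shot begins.
            preset_interval_s = 0.5 * shooting_interval_s
            while True:
                env_image = camera.capture()            # collect environment image
                regions = detector.detect(env_image)    # human-feature image areas
                if regions:
                    shooting_part.prohibit()            # shooting prohibition process
                else:
                    shooting_part.allow()
                time.sleep(preset_interval_s)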
  • the shooting device calls the camera device to collect the environment image of the current environment, it may call a preset human body part detection model to perform image region recognition on the environment image in S202.
  • the human body feature detection model may include a head and shoulder detection model.
  • the initial detection model can be trained according to the training images in the sample image set and the annotation information of each training image to obtain the head and shoulders detection model.
  • the head and shoulders detection model after training optimization can be configured into the shooting device for providing detection of the image area including the head and shoulders.
  • the sample image set includes a plurality of training image groups collected in different shooting scenes, the training images in a training image group include an image region of the head and shoulders, and the labeling information includes the image position information corresponding to the head and shoulders in the training image.
  • the image area corresponding to the head and shoulders may be a rectangular area.
  • the image position information of the head and shoulders in the training image may include the position information of the upper-left and lower-right corners of the image area corresponding to the head and shoulders in the training image; such image position information can reflect both the specific position of the head-and-shoulder image area in the training image and the size of that image area.
  • the position information of the upper left corner and the lower right corner of the image area corresponding to the head and shoulders in the training image may be the coordinate information of the upper left corner and the lower right corner, respectively.
  • as shown in FIG. 3, the training data specifically includes: a training image p2 and an image area 304 including the head and shoulders, where endpoint a corresponds to the upper-left corner of 304 and endpoint b corresponds to the lower-right corner of 304; the coordinates of point a are (90, 300) and the coordinates of point b are (300, 100).
  • the coordinate information of points a and b can be used to determine the image area 304 including the head and shoulders in the training image; in this case, the coordinate information of points a and b is the image position information of the head and shoulders in the training image.
  • in other embodiments, the image area corresponding to the head and shoulders may be an area of another shape.
  • if the other shape is an n-sided polygon (n being an integer greater than or equal to 3), the above image position information may be the coordinate information of the n vertices of the polygon; if the other shape is a circle, the image position information may include the coordinate information of the center of the circle and the radius of the circle. The present invention does not specifically limit this.
  • the shooting scene may include head and shoulder shooting scenes with different postures or different angles under different lights (such as backlighting or normal lighting).
  • multiple training images of the side-view head and shoulders taken in a backlight shooting scene can be collected in advance to obtain a first training image group; multiple training images of the front-view head and shoulders taken in the backlight shooting scene are collected to obtain a second training image group; multiple training images of the back-view head and shoulders taken in the backlight shooting scene are collected to obtain a third training image group; multiple training images of the front-view head and shoulders taken under normal lighting are collected to obtain a fourth training image group; multiple training images of the side-view head and shoulders taken under normal lighting are collected to obtain a fifth training image group; and multiple training images of the back-view head and shoulders taken under normal lighting are collected to obtain a sixth training image group.
  • the training images in each training image group can then be annotated with the image position information of the head and shoulders, that is, the image position information of the image area including the head and shoulders in each training image is annotated to obtain the corresponding labeling information of each training image.
  • after the annotation, the initial detection model can be trained according to the large number of training images and the labeling information of each training image to obtain a head-and-shoulder detection model that can be used to detect head-and-shoulder image areas, and the optimized head-and-shoulder detection model is configured into the corresponding shooting device to provide detection of head-and-shoulder image areas.
  • the initial detection model may be, for example, an object detection model based on a neural network, and the neural network may be, for example, a convolutional neural network.
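  • The patent does not fix a particular architecture, so the following is only a minimal PyTorch sketch of training a box-regression network on (training image, head-and-shoulder box) pairs annotated with upper-left and lower-right corner coordinates; the dataset loader and the tiny network are assumptions, not the patent's model.

        import torch
        import torch.nn as nn

        class HeadShoulderNet(nn.Module):
            # Toy convolutional network that regresses one head-and-shoulder
            # box (x1, y1, x2, y2) per image; a real detector would be larger.
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.box_head = nn.Linear(32, 4)

            def forward(self, x):
                return self.box_head(self.features(x).flatten(1))

        def train(model, loader, epochs=10):
            # `loader` yields (images, boxes), where each box holds the
            # annotated upper-left and lower-right corner coordinates of the
            # head-and-shoulder image area in the training image.
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            loss_fn = nn.SmoothL1Loss()
            for _ in range(epochs):
                for images, boxes in loader:
                    opt.zero_grad()
                    loss = loss_fn(model(images), boxes)
                    loss.backward()
                    opt.step()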
  • for examples of the training images, see FIGS. 4a-4d.
  • FIG. 4a is a training image p3, collected under normal lighting, that includes the image area corresponding to the side-view head and shoulders; the coordinates of a1 and b1 are the image position information of the head and shoulders in training image p3.
  • FIG. 4b is a training image p4, collected under normal lighting, that includes the image area corresponding to the back-view head and shoulders; the coordinates of a2 and b2 are the image position information of the head and shoulders in training image p4.
  • FIG. 4c is a training image p5, collected in a backlight shooting scene, that includes the image area corresponding to the front-view head and shoulders; the coordinates of a3 and b3 are the position information of the head and shoulders in training image p5.
  • FIG. 4d is a training image p6, collected in a backlight shooting scene, that includes the image area corresponding to the back-view head and shoulders; the labeled coordinates are the image position information of the head and shoulders in training image p6.
  • the above head-and-shoulder detection model is obtained by training the initial detection model on a large number of training images of head-and-shoulder characteristic parts collected in different shooting scenes (such as backlighting, normal lighting, and front-view, side-view, bowed and back-view head and shoulders).
  • the trained head-and-shoulder detection model is therefore highly robust both to the lighting of the input image and to the shooting angle of the face in the input image, and can recognize a person's front face, side face, bowed head and the back of the head.
  • compared with detecting the whole human body, the head-and-shoulder detection model appropriately reduces the size of the detected image area, which both keeps the shooting device from accidentally injuring key parts such as the human face and helps preserve the playability of shooting toys. This is because the image area used for whole-body detection is relatively large: if shooting at the entire human body were restricted, the playability of the shooting device would be seriously affected, especially for certain shooting toy products, whereas shooting at the legs, stomach and similar parts is comparatively less harmful.
  • after the shooting device invokes the preset human body characteristic part detection model in S202 to perform image region recognition on the environment image, if the recognition result in S203 indicates that there is a target image region including the preset human body characteristic part in the environment image, the shooting part is prohibited from shooting.
  • the preset human body characteristic part is a human body characteristic part.
  • the preset human body feature parts may include head and shoulder feature parts, and in addition, the preset human body feature parts may also include human body, human face and other feature parts.
  • the preset human body characteristic part has an association relationship with the human body characteristic part detection model.
  • for example, if the human body characteristic part detection model can be used to detect an image area including a human body, the preset human body characteristic part may include the human body; if the model can be used to detect an image area including a human face, the preset human body characteristic part may include a human face; and if the model can be used to detect an image area including a head-and-shoulder characteristic part, the preset human body characteristic part may include the head-and-shoulder characteristic part.
  • the shooting device may control the shooting section to prohibit shooting, such as prohibiting the shooting section from firing bullets.
  • in this way, the shooting function of the shooting part in the shooting device can be actively restricted, which helps prevent the shooting part from shooting at characteristic parts of the human body.
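  • One way to realize the prohibition is a software interlock in front of the firing actuator, as in this hypothetical sketch (the class and method names are illustrative, not from the patent):

        import threading

        class ShootingPart:
            def __init__(self):
                self._prohibited = threading.Event()

            def prohibit(self):
                self._prohibited.set()      # enter the shooting prohibition state

            def allow(self):
                self._prohibited.clear()    # lift the prohibition

            def fire(self):
                if self._prohibited.is_set():
                    return False            # refuse to launch while prohibited
                # ... actuate the launch mechanism here ...
                return True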
  • in the embodiment of the present invention, the shooting device may call the camera device to collect an environment image of the environment in which it is currently located, and call the preset human body characteristic part detection model to perform image region recognition on the environment image; if the recognition result indicates that there is a target image area including a preset human body characteristic part in the environment image, the shooting part is prohibited from shooting.
  • in this way, images can be combined to limit the shooting function of the shooting part in the shooting device, which helps prevent the shooting part from hitting characteristic parts of the human body and improves the safety of the shooting device.
  • FIG. 5 is a schematic flowchart of another image recognition-based control method provided by an embodiment of the present invention.
  • the method according to the embodiment of the present invention may be executed by a shooting device, and the method is applied to a shooting device.
  • the shooting device includes a shooting part and a camera device.
  • the shooting device may call the camera device in S501 to collect an environment image of the environment where the shooting device is currently located, and call the preset human body characteristic part detection model in S502 to perform image area recognition on the environment image.
  • for step S501 to step S502, reference may be made to the related description of step S201 to step S202 in the foregoing embodiment; details are not described herein again.
  • after the shooting device invokes the preset human body characteristic part detection model to perform image area recognition on the environment image, if the recognition result indicates that there is a target image area including the preset human body characteristic part in the environment image, it is determined in S503 whether a preset control relationship is satisfied between the target image area and the shooting image area.
  • the shooting image area is an image area corresponding to the shooting estimation area of the shooting part of the shooting device on the environment image.
  • both the shooting unit and the camera device are built into the shooting device, and the two have a linkage relationship.
  • the shooting image area can be determined according to the installation relationship between the shooting part and the camera device; specifically, it may be determined according to the orientation of the camera device relative to the shooting part, the installation distance between them, and the like.
  • Fig. 1b includes a shooting device 102, which includes a camera device and a shooting unit.
  • the camera device is located directly above the shooting unit.
  • the installation distance between the camera device and the shooting unit is d.
  • the shooting device can call the camera device to collect an environment image of the current environment.
  • the collected environment image p1 can be as shown in FIG. 1c, and the size of the environment image p1 is 640 × 300.
  • FIG. 1c includes: the shooting image area 103, the center point O1 of the collected environment image p1, and the center point O2 of the shooting image area, where the coordinates of O1 are (300, 150) and the coordinates of O2 are (300, 200).
  • because the camera device is located directly above the shooting part, the shooting image area 103 lies directly below the center of the environment image p1, and the center point O2 of the shooting image area has the same horizontal coordinate as the center point O1 of the environment image.
  • the distance d1 between O1 and O2 is related to the vertical installation distance d.
  • conversely, if the camera device were installed directly below the shooting part, the shooting image area would be located directly above the center of the environment image. It can be seen that the position of the shooting image area changes with the orientation of the camera device relative to the shooting part and with the installation distance d between the camera device and the shooting part.
  • after the shooting device determines whether the target image area and the shooting image area satisfy the preset control relationship, if it is determined in S504 that they satisfy the preset control relationship, the shooting part is subjected to shooting prohibition processing.
  • the above-mentioned preset control relationship may include the overlapping relationship between the target image area and the shooting image area, the distance between the target image area and the shooting image area, or the inclusion relationship between the target image area and the shooting image area; a code sketch of these three checks follows the examples below.
  • in one embodiment, the shooting device may determine whether the overlap between the target image area and the shooting image area is greater than or equal to a preset overlap degree threshold; if so, it determines that the target image area and the shooting image area satisfy the preset control relationship.
  • the overlap degree may refer to the ratio of the overlapping area between the target image area and the shooting image area to the target image area, and the preset overlap degree threshold may be a preset ratio threshold. For example, suppose the preset ratio threshold is 30% and the area of the target image area is 100.
  • if the shooting device detects that the overlapping area between the target image area and the shooting image area is 50, it can calculate that the overlapping area accounts for 50% of the target image area; since 50% is greater than the preset threshold of 30%, it can be determined that the target image area and the shooting image area satisfy the preset control relationship.
  • in another embodiment, the shooting device may determine whether the distance between the target image area and the shooting image area is less than or equal to a preset distance; if so, it determines that the target image area and the shooting image area satisfy the preset control relationship.
  • the distance between the target image area and the shooting image area may be the distance between the center of the target image area and the center of the shooting image area, in which case the preset distance is a preset threshold on the distance between the two centers.
  • alternatively, the distance between the target image area and the shooting image area may be the minimum edge distance between the edge of the target image area and the edge of the shooting image area, in which case the preset distance is a preset minimum edge-distance threshold between the target image area and the shooting image area.
  • in yet another embodiment, the shooting device may determine whether the target image area is located within the shooting image area; if so, it may determine that the target image area and the shooting image area satisfy the preset control relationship.
  • similarly, the shooting device may determine whether the shooting image area is located within the target image area, and if so, determine that the target image area and the shooting image area satisfy the preset control relationship.
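  • The three preset control relationships above can be checked with simple rectangle geometry; the following is a sketch, with boxes given as (x1, y1, x2, y2) corner coordinates and the thresholds chosen only for illustration:

        def overlap_ratio(target, shot):
            # Overlapping area as a fraction of the target image area.
            ix1, iy1 = max(target[0], shot[0]), max(target[1], shot[1])
            ix2, iy2 = min(target[2], shot[2]), min(target[3], shot[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            area = (target[2] - target[0]) * (target[3] - target[1])
            return inter / area

        def center_distance(a, b):
            ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
            bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
            return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

        def contains(outer, inner):
            return (outer[0] <= inner[0] and outer[1] <= inner[1]
                    and outer[2] >= inner[2] and outer[3] >= inner[3])

        def preset_control_relationship(target, shot, overlap_thr=0.30, dist_thr=50.0):
            # True if any of the three relationships from the text holds.
            return (overlap_ratio(target, shot) >= overlap_thr
                    or center_distance(target, shot) <= dist_thr
                    or contains(shot, target)
                    or contains(target, shot))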
  • in the embodiment of the present invention, the camera device can be called to collect an environment image of the environment in which the shooting device is currently located, and the preset human body characteristic part detection model can be called to perform image region recognition on the environment image; if the recognition result indicates that there is a target image area including a preset human body characteristic part in the environment image, it is determined whether the target image area and the shooting image area satisfy the preset control relationship, and if they do, the shooting part is subjected to shooting prohibition processing.
  • in this way, the shooting function of the shooting part in the shooting device can be limited, which helps prevent the shooting part from hitting characteristic parts of the human body and improves the safety of the shooting device, while avoiding prohibition conditions so strict that the playability of the shooting device would decrease.
  • FIG. 7 is a schematic flowchart of yet another image recognition-based control method provided by an embodiment of the present invention.
  • the method of the embodiment of the present invention may be executed by a shooting device.
  • the shooting device includes a shooting part and a camera device.
  • the shooting device may call the camera device in S701 to collect an environment image of the environment where the shooting device is currently located.
  • after the shooting device calls the camera device to collect the environment image of the environment in which it is currently located, it may, in S702, intercept an image area from the environment image to obtain a first sub-image and call the preset human body characteristic part detection model to perform image region recognition processing on the first sub-image; in S703, the preset human body characteristic part detection model can be called to perform full-image detection and recognition on the environment image.
  • in one embodiment, after the shooting device intercepts the image area from the environment image to obtain the first sub-image, it can enlarge the first sub-image to the target image size corresponding to the environment image to obtain a first enlarged sub-image, and call the preset human body characteristic part detection model to perform image region recognition processing on the first enlarged sub-image.
  • the target image size corresponding to the environment image is preset; in one embodiment it can be the same size as the training images used when training the human body characteristic part detection model, which helps improve the image recognition accuracy of the model, and in another embodiment it can be the same as the image size of the environment image.
  • the above-mentioned first sub-image may be obtained according to the shot image area.
  • the first sub-image may be an image corresponding to the shot image area.
  • for example, suppose the target image size corresponding to the environment image is 640 × 300, the size of the environment image p7 is 640 × 300, the size of the shooting image area 604 is 210 × 180, and the coordinates of the endpoint b5 at the lower-right corner of 604 are (300, 90).
  • the shooting device can intercept the image corresponding to the shooting image area from the environment image shown in FIG. 6a to obtain the first sub-image shown in FIG. 6b.
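  • A hypothetical OpenCV/NumPy sketch of this crop-and-enlarge step (coordinates follow the usual image convention of x to the right and y downward):

        import cv2

        def first_sub_image(env_image, shot_box, target_size=(640, 300)):
            # Crop the shooting image area from the environment image, then
            # enlarge the crop to the preset target size (width, height).
            x1, y1, x2, y2 = shot_box
            crop = env_image[y1:y2, x1:x2]
            return cv2.resize(crop, target_size)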
  • the shooting device can call a preset human body feature detection model to perform full-image detection and recognition on the environment image.
  • in one embodiment, step S702 of performing recognition on the first sub-image (hereinafter referred to as half-image recognition) and step S703 of full-image detection and recognition (hereinafter referred to as full-image recognition) can be performed synchronously.
  • in this case, the preset human body characteristic part detection model is called to perform image area recognition on the first sub-image, and the resulting recognition result is called the first recognition result; the preset human body characteristic part detection model is also called to perform full-image detection and recognition on the environment image, and the resulting recognition result is called the second recognition result.
  • once either recognition result indicates that a target image area exists, the half-image recognition and the full-image recognition are stopped.
  • in another embodiment, the shooting device may also first perform step S702 of calling the preset human body characteristic part detection model to perform image region recognition on the first sub-image; if it recognizes that the first sub-image does not include a target image area with the preset human body characteristic part, that is, that there is no target image area including the preset human body characteristic part in that part of the environment image, it then performs step S703 of calling the preset human body characteristic part detection model to perform full-image detection and recognition on the environment image.
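  • A minimal sketch of running half-image recognition and full-image recognition synchronously, assuming a hypothetical detector.detect call that returns the recognized target image areas (early cancellation of the slower branch is omitted for brevity):

        from concurrent.futures import ThreadPoolExecutor

        def recognize(detector, env_image, first_sub_image):
            with ThreadPoolExecutor(max_workers=2) as pool:
                half = pool.submit(detector.detect, first_sub_image)  # first recognition result
                full = pool.submit(detector.detect, env_image)        # second recognition result
                # Use whichever recognition reports a target image area.
                return half.result() or full.result()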
  • if, in S704, the recognition result indicates that there is a target image area including a preset human body characteristic part in the environment image, it is determined whether the preset control relationship is satisfied between the target image area and the shooting image area.
  • the recognition result may include a result of performing image area recognition on the first sub-image (ie, the first recognition result) and / or a result of performing full-image detection and recognition on the environment image (ie, the second recognition result).
  • in other words, the shooting device not only calls the preset human body characteristic part detection model to perform image area recognition processing on the first sub-image, but also calls the preset human body characteristic part detection model to perform full-image detection and recognition on the environment image.
  • after the shooting device determines in S704 whether the target image area and the shooting image area satisfy the preset control relationship, if it is determined in S705 that they satisfy the preset control relationship, the shooting part is subjected to shooting prohibition processing.
  • for the specific implementation of step S705, reference may be made to the related description of step S504 in the foregoing embodiment; details are not described herein again.
  • in the embodiment of the present invention, the shooting device may call the camera device to collect an environment image of the environment in which it is currently located, intercept an image area from the environment image to obtain a first sub-image, call the preset human body characteristic part detection model to perform image area recognition processing on the first sub-image, and call the preset human body characteristic part detection model to perform full-image detection and recognition on the environment image.
  • if the recognition result indicates that there is a target image area including a preset human body characteristic part in the environment image, it is determined whether the preset control relationship is satisfied between the target image area and the shooting image area; if so, the shooting part is prohibited from shooting.
  • in this way, half-image recognition and full-image recognition are used together to detect human body characteristic parts, which helps improve the detection efficiency for human body characteristic parts; it also helps prevent the shooting part from shooting at characteristic parts of the human body, improving the safety of the shooting device.
  • FIG. 8 is a schematic flowchart of yet another image recognition-based control method provided by an embodiment of the present invention.
  • the method according to the embodiment of the present invention may be executed by a shooting device.
  • the shooting device includes a shooting part, a camera device, and an infrared camera device.
  • the shooting device may call the camera device in S801 to collect an environment image of the environment where the shooting device is currently located.
  • for step S801, reference may be made to the related description of step S201 in the foregoing embodiment; details are not described herein again.
  • in S802, a heat-generating image area can be determined from the infrared environment image captured by the infrared camera device, and in S803 an image area is intercepted from the environment image according to the heat-generating image area to obtain a second sub-image.
  • the infrared environment image and the environment image have the same size.
  • after the shooting device determines the heat-generating image area from the infrared environment image captured by the infrared camera device, it can obtain the position information of the heat-generating image area in the infrared environment image; based on that position information, an image area at the same position and with the same extent as the heat-generating image area is intercepted from the environment image to obtain the second sub-image.
  • the heat image area may be a rectangular area
  • the position information of the heat image area in the infrared environment image may be the position coordinates of the upper left corner and the position coordinates of the lower right corner of the heat image area.
  • for example, suppose the position coordinates of the endpoint a6 at the upper-left corner of the heat-generating image area are (90, 300) and the position coordinates of the endpoint b6 at the lower-right corner are (300, 100).
  • the shooting device can then determine the second sub-image in the environment image according to the position coordinates of endpoints a6 and b6 in the infrared environment image: the coordinates of the upper-left endpoint a7 of the second sub-image in the environment image are the same as those of a6, and the coordinates of the lower-right endpoint b7 in the environment image are the same as those of b6.
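  • Because the infrared environment image and the environment image have the same size and are aligned, the crop is a direct index with the heat-generating box, as in this sketch:

        def second_sub_image(env_image, heat_box):
            # heat_box is (x1, y1, x2, y2) from the infrared environment image;
            # the same coordinates select the matching area of the RGB image.
            x1, y1, x2, y2 = heat_box
            return env_image[y1:y2, x1:x2]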
  • after the shooting device obtains the second sub-image, it may call the preset human body characteristic part detection model in S804 to perform image region recognition processing on the second sub-image.
  • in one embodiment, after the shooting device intercepts the image area from the environment image according to the heat-generating image area to obtain the second sub-image, it can also enlarge the second sub-image to the target image size corresponding to the environment image to obtain a second enlarged sub-image, and call the preset human body characteristic part detection model to perform image region recognition processing on the second enlarged sub-image.
  • as before, the target image size corresponding to the environment image is preset; it can be the same size as the training images used when training the human body characteristic part detection model, which helps improve the image recognition accuracy of the model, or it can be the same as the image size of the environment image.
  • after the shooting device invokes the preset human body characteristic part detection model to perform image area recognition processing on the second sub-image, if the recognition result in S805 indicates that there is a target image area including the preset human body characteristic part in the environment image, the shooting part is prohibited from shooting.
  • in one embodiment, the distance between the shooting device and the human body corresponding to the preset human body characteristic part in the environment image can also be determined; when the distance is less than or equal to the preset maximum range of the shooting device, the shooting part is prohibited from shooting.
  • the shooting device may calculate the distance between the human body corresponding to the preset human body feature part in the environmental image and the shooting device according to the proportion of the target image area in the environmental image.
  • specifically, the shooting device can obtain the focal length and the current focusing distance of the camera device.
  • at a given focal length, the larger the proportion of the picture occupied by the human body corresponding to the preset human body characteristic part in the environment image, the shorter the distance between that human body and the camera.
  • the distance between the human body corresponding to the preset human body characteristic part and the shooting device can therefore be determined according to this proportion.
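  • The patent only states the inverse relation between apparent size and distance; a pinhole-camera sketch of that idea, with an assumed physical size for the head-and-shoulder part, could look like this:

        def estimate_distance(focal_len_px, real_height_m, box_height_px):
            # Pinhole model: distance = focal length * real size / image size.
            return focal_len_px * real_height_m / box_height_px

        # e.g. a 900 px focal length and a ~0.5 m head-and-shoulder part
        # spanning 150 px gives estimate_distance(900, 0.5, 150) == 3.0 m;
        # prohibit shooting if this is within the preset maximum range.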
  • the shooting device may also be pre-configured with a distance sensor.
  • the shooting device can acquire the sensing data of the distance sensor, and obtain the distance between the human body (hereinafter referred to as the target human body) corresponding to the preset human body characteristic part in the environmental image and the shooting device according to the sensing data.
  • the distance sensor may be an infrared ranging sensor, which can sense the distance between an obstacle in the current environment and the shooting device. Since the shooting device has already determined in step S805 that there is a target image area including the preset human body characteristic part in the environment image, the human body corresponding to the preset human body characteristic part in the environment image may be regarded as one kind of obstacle.
  • the basic ranging principle of the infrared ranging sensor is that a light-emitting tube emits infrared light and a light-sensitive receiver tube receives the light reflected by the object in front, thereby judging whether there is an obstacle ahead.
  • the distance of the object can be judged according to the intensity of the reflected light: the light intensity received by the receiver tube changes with the distance of the reflecting object, being strong at short distances and weak at long distances.
  • the infrared ranging sensor and the camera device are both installed in the shooting device, and the infrared ranging sensor can sense the distances of the various obstacles in the current environment from the shooting device; that is, the sensing data of the infrared ranging sensor includes distance data corresponding to the various obstacles.
  • the shooting device can extract the distance data of the target human body from the sensing data of the infrared ranging sensor according to the calibration relationship between the infrared ranging sensor and the camera device, and can then obtain the distance between the target human body and the shooting device from the distance data of the target human body.
  • the sensing data of the infrared ranging sensor may be a distance image corresponding to the current environment, that is, a depth map.
  • the depth map includes multiple pixel points, and the value of each pixel point can represent the distance between the obstacle corresponding to the point and the infrared ranging sensor, that is, each pixel point can correspond to one piece of distance data.
  • the shooting device can obtain the image position information of the target image area in the environment image, determine the depth area corresponding to the target image area in the depth map according to the image position information, and then adjust the depth area according to the calibration relationship between the infrared ranging sensor and the camera device to obtain the target depth area.
  • the shooting device can determine the pixels located in the target depth area as the target pixels among all pixels of the depth map and obtain the distance data of the target pixels; the distance between the target human body and the shooting device, i.e., the distance data of the target human body, can then be obtained from the distance data of the target pixels.
  • the distance data of all target pixels may be averaged, and the distance data obtained by the averaging is the distance data of the target human body.
  • the above calibration relationship can represent the relative installation positions of the infrared ranging sensor and the camera device and their installation distance l1.
  • for example, if the infrared ranging sensor is installed below the camera device, the shooting device may move the depth area up by k1 * l1 in the depth map; the area obtained after the shift is the target depth area, where k1 is greater than 0 and its specific value can be set in advance.
  • conversely, if the infrared ranging sensor is installed above the camera device, the shooting device can move the depth area down by k1 * l1 in the depth map, and the area obtained after moving down is the target depth area.
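  • A NumPy sketch of this shift-and-average step; the sign convention for the vertical shift encodes the installation relationship and is an assumption here:

        import numpy as np

        def target_body_distance(depth_map, box, k1, l1, shift_up=True):
            # Shift the depth area by k1 * l1 pixels to compensate the vertical
            # installation offset between the ranging sensor and the camera,
            # then average the distance data inside the target depth area.
            x1, y1, x2, y2 = box
            dy = int(round(k1 * l1))
            dy = -dy if shift_up else dy          # smaller y is higher in the image
            h = depth_map.shape[0]
            ty1, ty2 = np.clip([y1 + dy, y2 + dy], 0, h)
            region = depth_map[ty1:ty2, x1:x2]
            return float(region.mean()) if region.size else float("nan")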
  • in the embodiment of the present invention, the shooting device may call the camera device to collect an environment image of the environment in which it is currently located, determine the heat-generating image area from the infrared environment image captured by the infrared camera device, intercept an image area from the environment image according to the heat-generating image area to obtain a second sub-image, and call the preset human body characteristic part detection model to perform image area recognition on the second sub-image. If the recognition result indicates that there is a target image area including a preset human body characteristic part in the environment image, the shooting part is prohibited from shooting.
  • in this way, the shooting function of the shooting part in the shooting device can be limited, which helps prevent the shooting part from hitting characteristic parts of the human body and improves the safety of the shooting device.
  • based on the above description, an embodiment of the present invention further provides a control device based on image recognition as shown in FIG. 9; the control device is configured in a shooting device, the shooting device includes a shooting part and a camera device, and the control device includes:
  • the collection module 90 is used to call the camera device to collect the environment image of the environment where the shooting device is currently located;
  • the processing module 91 is configured to call a preset human body feature detection model to perform image area recognition on the environment image collected by the collection module 90;
  • the processing module 91 is further configured to perform shooting prohibition processing on the shooting section if the recognition result indicates that there is a target image area including a preset human body characteristic part in the environmental image.
  • in one embodiment, the processing module 91 is further configured to train the initial detection model according to the training images in the sample image set and the labeling information of each training image to obtain the human body characteristic part detection model, wherein the sample image set includes a plurality of training image groups collected in different shooting scenes, the training images in a training image group include an image area of the head and shoulders, and the labeling information includes the image position information corresponding to the head and shoulders in the training image.
  • in one embodiment, the processing module 91 is further configured to determine whether a preset control relationship is satisfied between the target image area and the shooting image area, where the shooting image area is the image area corresponding to the estimated shooting area of the shooting part on the environment image; if so, the shooting part is prohibited from shooting.
  • in one embodiment, the processing module 91 is further configured to: determine that the target image area and the shooting image area satisfy the preset control relationship if the overlap between the target image area and the shooting image area is greater than or equal to a preset overlap degree threshold; or determine that the target image area and the shooting image area satisfy the preset control relationship if the distance between the target image area and the shooting image area is less than or equal to a preset distance threshold; or determine that the target image area and the shooting image area satisfy the preset control relationship if the target image area is located in the shooting image area or the shooting image area is located in the target image area.
  • in one embodiment, the processing module 91 is further configured to determine the distance between the shooting device and the human body corresponding to the preset human body characteristic part in the environment image; when the distance is less than or equal to the preset maximum range of the shooting device, the step of performing the shooting prohibition process on the shooting part is triggered.
  • in one embodiment, the processing module 91 is further configured to calculate the distance between the shooting device and the human body corresponding to the preset human body characteristic part in the environment image according to the proportion of the target image area in the environment image; or to acquire the sensing data of a distance sensor and obtain, according to the sensing data, the distance between the shooting device and the human body corresponding to the preset human body characteristic part in the environment image.
  • in one embodiment, the processing module 91 is further configured to intercept an image area from the environment image to obtain a first sub-image and call the preset human body characteristic part detection model to perform image area recognition processing on the first sub-image; and/or to call the preset human body characteristic part detection model to perform full-image detection and recognition on the environment image.
  • in one embodiment, the processing module 91 is further configured to enlarge the first sub-image to the target image size corresponding to the environment image to obtain a first enlarged sub-image, and to call the preset human body characteristic part detection model to perform image region recognition processing on the first enlarged sub-image.
  • the first sub-image is obtained according to the shooting image area.
  • the shooting device further includes an infrared camera device
  • in one embodiment, the processing module 91 is further configured to determine a heat-generating image area from the infrared environment image captured by the infrared camera device; intercept an image area from the environment image according to the heat-generating image area to obtain a second sub-image; and call the preset human body characteristic part detection model to perform image area recognition processing on the second sub-image.
  • in one embodiment, the processing module 91 is further configured to enlarge the second sub-image to the target image size corresponding to the environment image to obtain a second enlarged sub-image, and to call the preset human body characteristic part detection model to perform image area recognition processing on the second enlarged sub-image.
  • the preset human body features include head and shoulder features.
  • FIG. 10 is a schematic block diagram of the structure of a control device provided by an embodiment of the present invention. The control device is configured in a shooting device, and the shooting device includes a shooting part and a camera device. The control device may include a controller 10, a communication interface 11 and a memory 12, which are connected through a bus; the memory 12 is used to store program instructions and image data (such as environment images).
  • the memory 12 may include volatile memory, such as random-access memory (RAM); the memory 12 may also include non-volatile memory, such as flash memory or a solid-state drive (SSD); the memory 12 may also be double data rate synchronous dynamic random-access memory (DDR SDRAM); the memory 12 may also include a combination of the above types of memory.
  • the memory 12 is used to store a computer program comprising program instructions, and the controller 10 is configured, when the program instructions are invoked, to: invoke the camera device to capture an environment image of the environment in which the shooting device is currently located; invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image; and, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, perform shooting-prohibition processing on the shooting part.
  • the controller 10 is further configured to determine whether the target image area and the shooting image area satisfy a preset control relationship, the shooting image area being the image area corresponding, on the environment image, to the estimated shooting area of the shooting part; and, if so, to prohibit the shooting part from shooting.
  • the controller 10 is further configured to determine that the target image area and the shooting image area satisfy the preset control relationship if the overlap degree between them is greater than or equal to a preset overlap threshold; or, if the distance between the target image area and the shooting image area is less than or equal to a preset distance threshold, determine that they satisfy the preset control relationship; or, if the target image area lies within the shooting image area or the shooting image area lies within the target image area, determine that they satisfy the preset control relationship.
  • the controller 10 is further configured to determine the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image; when the distance is less than or equal to the preset maximum range of the shooting device, trigger the step of performing shooting-prohibition processing on the shooting part.
  • the controller 10 is further configured to calculate, according to the proportion of the target image area in the environment image, the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image; or to acquire the sensing data of a distance sensor and derive that distance from the sensing data.
  • the controller 10 is further configured to crop an image area from the environment image to obtain a first sub-image and invoke a preset human-body-feature-part detection model to perform image area recognition on the first sub-image; and/or invoke a preset human-body-feature-part detection model to perform full-image detection and recognition on the environment image.
  • the controller 10 is further configured to enlarge the first sub-image to the target image size corresponding to the environment image to obtain a first enlarged sub-image, and invoke a preset human-body-feature-part detection model to perform image area recognition on the first enlarged sub-image.
  • the first sub-image is obtained according to the shooting image area.
  • the shooting device further includes an infrared camera device
  • the controller 10 is further configured to determine a thermal image area from the infrared environment image captured by the infrared camera device; crop an image area from the environment image according to the thermal image area to obtain a second sub-image; and invoke a preset human-body-feature-part detection model to perform image area recognition on the second sub-image.
  • the controller 10 is further configured to enlarge the second sub-image to the target image size corresponding to the environment image to obtain a second enlarged sub-image, and invoke a preset human-body-feature-part detection model to perform image area recognition on the second enlarged sub-image.
  • the preset human body feature part includes a head-shoulder feature part.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random-access memory (RAM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Studio Devices (AREA)

Abstract

This application provides an image-recognition-based control method, apparatus, control device, and computer storage medium. The method may include invoking a camera device to capture an environment image of the environment in which a shooting device is currently located, and invoking a preset human-body-feature-part detection model to perform image area recognition on the environment image; if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, shooting-prohibition processing is performed on the shooting part. With this application, safety control can be applied to the shooting device based on images, which helps improve the safety of the shooting device.

Description

Image-recognition-based control method, apparatus, and control device

Technical Field

The present invention relates to the technical field of image processing, and in particular to an image-recognition-based control method, apparatus, and control device.
Background

Shooting-type devices (such as shooting toys) have long been popular with users. During the use of these devices, some users misjudge their power, and the device often ends up shooting a feature part of a human body, causing injuries. Therefore, how to prevent a shooting-type device from shooting a feature part of the human body during use has become a problem to be solved urgently.
Summary of the Invention

Embodiments of the present invention provide an image-recognition-based control method, apparatus, control device, and storage medium, which can perform safety control on a shooting device based on images.
In one aspect, an embodiment of the present invention provides an image-recognition-based control method. The method is applied to a shooting device that includes a shooting part and a camera device, and the method includes:

invoking the camera device to capture an environment image of the environment in which the shooting device is currently located;

invoking a preset human-body-feature-part detection model to perform image area recognition on the environment image;

if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, performing shooting-prohibition processing on the shooting part.
In another aspect, an embodiment of the present invention provides an image-recognition-based control apparatus. The control apparatus is configured in a shooting device that includes a shooting part and a camera device, and the control apparatus includes:

an acquisition module, configured to invoke the camera device to capture an environment image of the environment in which the shooting device is currently located;

a processing module, configured to invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image captured by the acquisition module;

the processing module being further configured to perform shooting-prohibition processing on the shooting part if the recognition result indicates that the environment image contains a target image area including a preset human body feature part.
In yet another aspect, an embodiment of the present invention provides a control device. The control device is configured in a shooting device that includes a shooting part and a camera device. The control device includes a controller and a communication interface connected to each other, where the communication interface, under the control of the controller, is used to send and receive instructions, and the controller is configured to:

invoke the camera device to capture an environment image of the environment in which the shooting device is currently located;

invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image;

if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, perform shooting-prohibition processing on the shooting part.
In yet another aspect, an embodiment of the present invention provides a computer storage medium storing computer program instructions which, when executed, are used to implement the image-recognition-based control method described above.
In the embodiments of the present invention, the camera device may be invoked to capture an environment image of the environment in which the shooting device is currently located, and a preset human-body-feature-part detection model may be invoked to perform image area recognition on the environment image; if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting part is prohibited from shooting. With the present invention, safety control can be applied to the shooting device based on images, which helps improve the safety of the shooting device.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required by the embodiments are briefly introduced below. Evidently, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1a is a schematic structural diagram of a shooting device provided by an embodiment of the present invention;

Fig. 1b is a schematic diagram of a scene provided by an embodiment of the present invention;

Fig. 1c is a schematic diagram of an environment image provided by an embodiment of the present invention;

Fig. 2 is a schematic flowchart of an image-recognition-based control method provided by an embodiment of the present invention;

Fig. 3 is a schematic diagram of the image position information of a head-shoulder part in a training image provided by an embodiment of the present invention;

Fig. 4a is a schematic diagram of a training image provided by an embodiment of the present invention;

Fig. 4b is a schematic diagram of another training image provided by an embodiment of the present invention;

Fig. 4c is a schematic diagram of yet another training image provided by an embodiment of the present invention;

Fig. 4d is a schematic diagram of yet another training image provided by an embodiment of the present invention;

Fig. 5 is a schematic flowchart of another image-recognition-based control method provided by an embodiment of the present invention;

Fig. 6a is a schematic diagram of another environment image provided by an embodiment of the present invention;

Fig. 6b is a schematic diagram of a first sub-image provided by an embodiment of the present invention;

Fig. 7 is a schematic flowchart of yet another image-recognition-based control method provided by an embodiment of the present invention;

Fig. 8 is a schematic flowchart of yet another image-recognition-based control method provided by an embodiment of the present invention;

Fig. 9 is a schematic structural diagram of an image-recognition-based control apparatus provided by an embodiment of the present invention;

Fig. 10 is a schematic structural diagram of a control device provided by an embodiment of the present invention.
Detailed Description

An embodiment of the present invention proposes an image-recognition-based control method that can be applied to a shooting device, which may be a toy product or a competitive-robot product. The shooting device includes a shooting part and a camera device. The shooting part hits a target or another competitive robot by launching soft plastic objects, while the camera device can capture environment images of the environment in which the shooting device is located, providing image-acquisition functions so that safe shooting control can be applied to the shooting part automatically.
The shooting device may be a hand-held stand-alone device, or may be installed on equipment requiring a shooting function, such as an unmanned aerial vehicle, an aerial photography aircraft, a gimbal, or a remote-control car. As shown in Fig. 1a, the shooting device includes a camera device 100 and a shooting part 101, both fixed to the main structure of the shooting device, with the camera device 100 located directly above the shooting part 101. The shooting part 101 may be used for launch processing, e.g., firing plastic objects such as BB pellets or water-gel beads; the camera device 100 may be used to capture an environment image directly in front of the shooting part 101 in the environment in which the shooting device is currently located.
Referring further to Fig. 1b, the shooting device may invoke the camera device to capture an environment image of its current environment; the captured environment image p1 may be as shown in Fig. 1c. Fig. 1c includes: the image area 103 corresponding, on the environment image p1, to the estimated shooting area of the shooting part (i.e., the shooting image area), the center point O1 of the captured environment image p1, the center point O2 of the shooting image area, and a target image area 104 including a preset human body feature part, where the preset human body feature part is a feature part of the human body, for example the human body, a human face, or a head-shoulder part. Further, after capturing the environment image p1, the shooting device may invoke a preset human-body-feature-part detection model to perform image area recognition on p1; if the recognition result indicates that p1 contains the target image area 104 shown in Fig. 1c, the shooting part may be controlled to prohibit shooting, e.g., prohibited from firing pellets. With the present invention, safety control can be applied to the shooting device based on images, thereby preventing the shooting part from shooting a feature part of a human body and improving the safety of the shooting device.
The shooting device 102 in Fig. 1b is merely an example; the specific structure of the shooting device in the present invention can be seen in Fig. 1a. Moreover, the shooting device in Fig. 1b is only illustrative: in other examples, it may be mounted on equipment such as a competitive robot or an unmanned aerial vehicle, or the shooting device itself may be such equipment, i.e., a competitive robot, unmanned aerial vehicle, or the like having a camera device and a shooting part. Likewise, Figs. 1b and 1c only exemplify the scenes involved in the embodiments of the present invention and mainly illustrate part of the image-recognition and shooting-control principles by which the shooting device is safely controlled based on the camera device and the shooting part.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of an image-recognition-based control method provided by an embodiment of the present invention. The method may be executed by a shooting device and is applied to a shooting device that includes a shooting part and a camera device.
In the image-recognition-based control method shown in Fig. 2, in S201 the shooting device invokes the camera device to capture an environment image of the environment in which the shooting device is currently located. In one embodiment, the shooting device may capture environment images through the camera device at a preset time interval. In one embodiment, the preset time interval may be set according to the shooting interval of the shooting part, where the shooting interval is the period between the end of the previous shot and the start of the current shot. In one embodiment, the preset time interval may be set smaller than the shooting interval, so that shooting-prohibition processing can be applied to the shooting part before the next shot starts.
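As a minimal sketch of this timing constraint (the `capture_environment_image` and `run_safety_check` helpers are assumed names, not from the patent), the capture loop might look like:

```python
import time

SHOT_INTERVAL_S = 0.5      # assumed shooting interval of the shooting part
CAPTURE_INTERVAL_S = 0.2   # preset capture interval, chosen smaller than the shot interval

def capture_loop(capture_environment_image, run_safety_check):
    """Capture an environment image at the preset interval so a safety
    decision is always available before the next shot can start."""
    assert CAPTURE_INTERVAL_S < SHOT_INTERVAL_S
    while True:
        image = capture_environment_image()   # hypothetical camera-device call
        run_safety_check(image)               # detection + prohibit-shooting logic
        time.sleep(CAPTURE_INTERVAL_S)
```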
After the shooting device invokes the camera device to capture the environment image of its current environment, it may, in S202, invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image.
In one embodiment, the human-body-feature-part detection model may include a head-shoulder detection model. When configuring the head-shoulder detection model, an initial detection model may be trained according to the training images in a sample image set and the annotation information of each training image to obtain the head-shoulder detection model. Further, the trained and optimized head-shoulder detection model may be deployed in the shooting device to provide detection of image areas that include head-shoulder parts. The sample image set includes multiple training image groups collected in different shooting scenes; the training images in each group include image areas of head-shoulder parts, and the annotation information includes the image position information of the head-shoulder part in the corresponding training image.
In one embodiment, the image area corresponding to the head-shoulder part may be a rectangular area. In this case, the image position information of the head-shoulder part in the training image may include the positions of the upper-left and lower-right corners of that rectangular area in the training image; this position information reflects both the specific location of the head-shoulder image area in the training image and the size of that area.
As shown in Fig. 3, the positions of the upper-left and lower-right corners of the head-shoulder image area in the training image may be given as the coordinate information of those two corners. The training data used for the human-body-feature-part detection model specifically includes: a training image p2; an image area 304 including a head-shoulder part; the endpoint a corresponding to the upper-left corner of 304; and the endpoint b corresponding to the lower-right corner of 304, where a = (90, 300) and b = (300, 100). The coordinates of a and b can be used to determine the image area 304 including the head-shoulder part; in this case, the coordinates of a and b constitute the image position information of the head-shoulder part in the training image.
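A minimal sketch of such an annotation record, with illustrative field names that are not from the original, might be:

```python
# One annotation entry per training image: the head-shoulder box is stored
# as its upper-left corner (a) and lower-right corner (b), using the
# coordinates from Fig. 3.
annotation = {
    "image": "p2.jpg",             # hypothetical file name for training image p2
    "head_shoulder_box": {
        "upper_left":  (90, 300),  # endpoint a
        "lower_right": (300, 100), # endpoint b
    },
}
```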
Alternatively, the image area corresponding to the head-shoulder part may be an area of another shape. If that shape is an n-gon (n being an integer greater than or equal to 3), the image position information may be the coordinate information of the n vertices of the n-gon; if that shape is a circle, the image position information may include the coordinate information of the circle's center and the circle's radius. The present invention does not specifically limit this.
In one embodiment, when configuring the head-shoulder detection model, the shooting scenes may include scenes of head-shoulder parts in different postures or at different angles under different lighting (such as backlight or normal light). In one embodiment, multiple training images of side-view head-shoulder parts photographed in a backlit scene may be collected in advance to obtain a first training image group; multiple training images of front-view head-shoulder parts photographed in a backlit scene, to obtain a second training image group; multiple training images of head-lowered head-shoulder parts photographed in a backlit scene, to obtain a third training image group; multiple training images of front-view head-shoulder parts photographed under normal light, to obtain a fourth training image group; multiple training images of side-view head-shoulder parts photographed under normal light, to obtain a fifth training image group; and multiple training images of back-view head-shoulder parts photographed under normal light, to obtain a sixth training image group. Further, the image position information of the head-shoulder part in the training images of each group may be annotated, that is, the image position information of the image area including the head-shoulder part, yielding the annotation information corresponding to each training image. After the training images and their annotation information are obtained, the initial detection model may be trained on the large number of training images and their annotations to obtain a head-shoulder detection model capable of detecting head-shoulder image areas, and the trained and optimized model may be deployed in the corresponding shooting device to provide detection of head-shoulder image areas. The initial detection model may be, for example, a neural-network-based object detection model, and the neural network may be, for example, a convolutional neural network.
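The patent does not fix a particular architecture, so the following is only a hedged sketch of how such a head-shoulder detector could be trained, using torchvision's Faster R-CNN as an assumed stand-in for the neural-network-based object detection model:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Two classes: background (0) and head-shoulder (1).
NUM_CLASSES = 2

def build_head_shoulder_detector():
    """Start from a generic CNN object detector and re-head it for
    head-shoulder detection; any comparable detector would do."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

def train_step(model, optimizer, images, targets):
    """One optimization step over a batch of annotated training images.
    `targets` carries the annotated head-shoulder boxes described above:
    [{'boxes': Tensor[N, 4] as (x1, y1, x2, y2), 'labels': Tensor[N]}, ...]."""
    model.train()
    loss_dict = model(images, targets)   # detection + classification losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```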
In one embodiment, see Figs. 4a to 4d. Fig. 4a is a training image p3 collected under normal light that includes a side-view head-shoulder image area, where the coordinates of a1 and b1 are the image position information of the head-shoulder part in p3; Fig. 4b is a training image p4 collected under normal light that includes a back-view head-shoulder image area, where the coordinates of a2 and b2 are the image position information of the head-shoulder part in p4; Fig. 4c is a training image p5 collected in a backlit scene that includes a front-view head-shoulder image area, where the coordinates of a3 and b3 are the image position information of the head-shoulder part in p5; Fig. 4d is a training image p6 collected in a backlit scene that includes a back-view head-shoulder image area, where the coordinates of a4 and b4 are the image position information of the head-shoulder part in p6.
It can be seen that the head-shoulder detection model is obtained by training the initial detection model on a large number of training images that include head-shoulder feature parts and were collected in different shooting scenes (backlight, normal light, front view, side view, lowered head, back view, and so on). On the one hand, because training images of head-shoulder feature parts at different angles and under different lighting are used, the trained head-shoulder detection model is highly robust to the lighting of the input image and to the angle at which the face was photographed: frontal faces, profile faces, lowered heads, and the back of the head can all be recognized. On the other hand, compared with detecting the whole human body, head-shoulder detection appropriately narrows the detected image area, which both protects key parts such as the face from accidental hits and helps preserve the playability of shooting toys. This is because the image area for whole-body detection is relatively large; restricting shots at the entire body would severely affect the playability of the shooting device, especially for certain shooting-toy products, where hitting the legs, belly, or similar parts is comparatively harmless.
After the shooting device invokes the preset human-body-feature-part detection model to perform image area recognition on the environment image in S202, in S203, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, shooting-prohibition processing is performed on the shooting part. The preset human body feature part is a feature part of the human body. In one embodiment, it may include a head-shoulder feature part; in addition, it may include feature parts such as the human body or a human face. The preset human body feature part is associated with the human-body-feature-part detection model: if the model can be used to detect image areas including a human body, the preset part may include the human body; if it can be used to detect image areas including a human face, the preset part may include a human face; if it can be used to detect image areas including head-shoulder feature parts, the preset part may include head-shoulder feature parts.
In one embodiment, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting device may control the shooting part to prohibit shooting, e.g., prohibit it from firing pellets. In this way, whenever a person is detected in the environment image, the shooting function of the shooting part is actively restricted, which helps prevent the shooting part from shooting a feature part of a human body.
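Putting S201 to S203 together, a minimal sketch of the per-frame safety decision (with `detector` standing in for the preset human-body-feature-part detection model and `shooting_part` for the device interface, both assumed names) could be:

```python
def safety_check(camera, detector, shooting_part):
    """One pass of the image-recognition-based safety control:
    capture an environment image, run the feature-part detector,
    and prohibit shooting if any target image area is found."""
    environment_image = camera.capture()
    target_areas = detector(environment_image)  # detected head-shoulder boxes
    if target_areas:
        shooting_part.prohibit_shooting()       # e.g. block the fire command
    else:
        shooting_part.allow_shooting()
```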
In the embodiments of the present invention, the shooting device may invoke the camera device to capture an environment image of its current environment and invoke a preset human-body-feature-part detection model to perform image area recognition on it; if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting part is prohibited from shooting. With the present invention, the shooting function of the shooting part can be restricted based on images, which helps prevent the shooting part from shooting a feature part of a human body and improves the safety of the shooting device.
Referring to Fig. 5, Fig. 5 is a schematic flowchart of another image-recognition-based control method provided by an embodiment of the present invention. The method may be executed by a shooting device and is applied to a shooting device that includes a shooting part and a camera device.

In the image-recognition-based control method shown in Fig. 5, the shooting device may, in S501, invoke the camera device to capture an environment image of its current environment and, in S502, invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image. For the specific implementation of S501 and S502, refer to the description of S201 and S202 in the above embodiment; details are not repeated here.
After the shooting device invokes the preset human-body-feature-part detection model to perform image area recognition on the environment image, in S503, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting device determines whether the target image area and the shooting image area satisfy a preset control relationship. The shooting image area is the image area corresponding, on the environment image, to the estimated shooting area of the shooting part of the shooting device. In one embodiment, the shooting part and the camera device are both built into the shooting device and move in linkage; the shooting image area may be determined according to the installation relationship between the shooting part and the camera device, specifically according to the orientation of the camera device relative to the shooting part and the installation distance between them.
In one embodiment, refer to Figs. 1b and 1c. Fig. 1b includes a shooting device 102 comprising a camera device and a shooting part, where the camera device is located directly above the shooting part at an installation distance d. The shooting device may invoke the camera device to capture an environment image of its current environment; the captured environment image p1, of size 640*300, may be as shown in Fig. 1c, which includes the shooting image area 103, the center point O1 of the captured environment image p1, and the center point O2 of the shooting image area, where O1 = (300, 150) and O2 = (300, 200). It can be seen that, since the shooting part is directly below the camera device, the shooting image area 103 is located directly below the center of the environment image p1, the center point O2 of the shooting image area has the same horizontal coordinate as the center point O1 of the environment image, and the distance d1 between O1 and O2 is related to the vertical installation distance d. Alternatively, in another embodiment, when the shooting part is directly above the camera device, the shooting image area is located directly above the center of the environment image. It can be seen that the position of the shooting image area changes with the orientation of the camera device relative to the shooting part and with the installation distance d between them.
After determining whether the target image area and the shooting image area satisfy the preset control relationship, in S504, if it is determined that they satisfy the preset control relationship, the shooting part is prohibited from shooting.

The preset control relationship may include an overlap relationship between the target image area and the shooting image area, a distance between the target image area and the shooting image area, or a containment relationship between the target image area and the shooting image area.
In one embodiment, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting device may determine whether the overlap degree between the target image area and the shooting image area is greater than or equal to a preset overlap threshold; if so, it determines that the target image area and the shooting image area satisfy the preset control relationship. In one embodiment, the overlap degree may refer to the proportion of the overlapping area of the two regions to the area of the target image area, and the preset overlap threshold may be a preset proportion threshold. For example, suppose the preset proportion threshold is 30% and the area of the target image area is 100. If the shooting device detects that the overlapping area of the target image area and the shooting image area is 50, it can compute that the overlap occupies 50% of the target image area; since 50% is greater than the preset proportion threshold of 30%, it determines that the target image area and the shooting image area satisfy the preset control relationship.
In one embodiment, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting device may determine whether the distance between the target image area and the shooting image area is less than or equal to a preset distance; if so, it determines that the target image area and the shooting image area satisfy the preset control relationship. In one embodiment, the distance between the two areas may be the distance between the center of the target image area and the center of the shooting image area, and the preset distance is then a preset threshold on that center-to-center distance.
In another embodiment, the distance between the target image area and the shooting image area may instead be the minimum edge-to-edge distance between the boundaries of the two areas, and the preset distance is then a preset threshold on that minimum edge distance.
In one embodiment, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting device may determine whether the target image area lies within the shooting image area; if so, it may determine that the target image area and the shooting image area satisfy the preset control relationship.

Alternatively, in another embodiment, the shooting device may determine whether the shooting image area lies within the target image area; if so, it determines that the target image area and the shooting image area satisfy the preset control relationship.
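As a minimal sketch of these three checks, with boxes given as (x1, y1, x2, y2) in top-left-origin pixel coordinates and all thresholds illustrative, the preset control relationship could be evaluated as:

```python
def overlap_ratio(target, shoot):
    """Overlap area divided by the target image area's own area."""
    x1 = max(target[0], shoot[0]); y1 = max(target[1], shoot[1])
    x2 = min(target[2], shoot[2]); y2 = min(target[3], shoot[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = (target[2] - target[0]) * (target[3] - target[1])
    return inter / area if area else 0.0

def center(box):
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def contains(outer, inner):
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def satisfies_control_relationship(target, shoot,
                                   overlap_thresh=0.3, dist_thresh=50.0):
    # Condition 1: overlap degree at or above the preset overlap threshold.
    if overlap_ratio(target, shoot) >= overlap_thresh:
        return True
    # Condition 2: center-to-center distance at or below the preset distance.
    (tx, ty), (sx, sy) = center(target), center(shoot)
    if ((tx - sx) ** 2 + (ty - sy) ** 2) ** 0.5 <= dist_thresh:
        return True
    # Condition 3: either area contained in the other.
    return contains(shoot, target) or contains(target, shoot)
```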
In the embodiments of the present invention, the camera device may be invoked to capture an environment image of the environment in which the shooting device is currently located, and a preset human-body-feature-part detection model may be invoked to perform image area recognition on it. If the recognition result indicates that the environment image contains a target image area including a preset human body feature part, it is determined whether the target image area and the shooting image area satisfy a preset control relationship; if they do, the shooting part is prohibited from shooting. With the present invention, on the one hand, the shooting function of the shooting part can be restricted, which helps prevent it from shooting a feature part of a human body and improves the safety of the shooting device; on the other hand, detecting the preset human body feature part avoids making the shooting-prohibition condition so strict that the playability of the shooting device suffers.
Referring to Fig. 7, Fig. 7 is a schematic flowchart of yet another image-recognition-based control method provided by an embodiment of the present invention. The method may be executed by a shooting device and is applied to a shooting device that includes a shooting part and a camera device.

In the image-recognition-based control method shown in Fig. 7, the shooting device may, in S701, invoke the camera device to capture an environment image of its current environment. For the specific implementation of S701, refer to the description of S201 in the above embodiment; details are not repeated here.
After the shooting device invokes the camera device to capture the environment image of its current environment, it may, in S702, crop an image area from the environment image to obtain a first sub-image and invoke the preset human-body-feature-part detection model to perform image area recognition on the first sub-image; it may also, in S703, invoke the preset human-body-feature-part detection model to perform full-image detection and recognition on the environment image.
In one embodiment, after cropping the first sub-image from the environment image, the shooting device may enlarge the first sub-image to the target image size corresponding to the environment image to obtain a first enlarged sub-image, and invoke the preset human-body-feature-part detection model to perform image area recognition on the first enlarged sub-image. The target image size corresponding to the environment image is preset: in one embodiment, it may equal the size of the training images used when training the detection model, which helps improve the recognition accuracy of the model; in another embodiment, it may equal the image size of the environment image itself.
In one embodiment, the first sub-image may be obtained according to the shooting image area; in one embodiment, the first sub-image may be the image corresponding to the shooting image area. Suppose the target image size corresponding to the environment image is 640*300. As shown in Figs. 6a and 6b, the environment image p7 is 640*300, the shooting image area 604 is 210*180, the upper-left endpoint a5 of 604 is at (90, 270), and the lower-right endpoint b5 of 604 is at (300, 90). In this case, on the one hand, the shooting device may crop the image corresponding to the shooting image area out of the environment image shown in Fig. 6a according to the size of the shooting image area and the coordinates of endpoints a5 and b5, obtaining the first sub-image 605 shown in Fig. 6b; that is, the first sub-image 605 is the image corresponding to the shooting image area 604. Further, the first sub-image may be enlarged from 210*180 to 640*300 to obtain the first enlarged sub-image, which is then input to the preset human-body-feature-part detection model for image area recognition. On the other hand, the shooting device may invoke the preset human-body-feature-part detection model to perform full-image detection and recognition on the environment image.
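A minimal OpenCV sketch of this crop-and-enlarge step under the assumption of a top-left pixel origin (the patent's figures use a bottom-left origin, so the rows would be flipped accordingly):

```python
import cv2

TARGET_SIZE = (640, 300)   # preset target image size (width, height) for the detector

def first_sub_image(environment_image, shoot_box):
    """Crop the shooting image area out of the environment image and
    enlarge it to the preset target size. `shoot_box` is (x1, y1, x2, y2)
    in top-left-origin pixel coordinates."""
    x1, y1, x2, y2 = shoot_box
    sub = environment_image[y1:y2, x1:x2]   # first sub-image
    return cv2.resize(sub, TARGET_SIZE)     # first enlarged sub-image
```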
In one embodiment, step S702, invoking the preset human-body-feature-part detection model to perform image area recognition on the first sub-image (hereinafter, partial-image recognition), and step S703, invoking the preset detection model to perform full-image detection and recognition on the environment image (hereinafter, full-image recognition), may be performed in parallel. In the parallel case, the result of recognizing the first sub-image is called the first recognition result, and the result of full-image detection and recognition on the environment image is called the second recognition result. In this case, if either recognition result indicates that the environment image contains a target image area including a preset human body feature part, both the partial-image recognition and the full-image recognition are stopped.
Alternatively, in another embodiment, the shooting device may first perform step S702, recognizing the first sub-image; if no target image area including a preset human body feature part is recognized in the first sub-image, that is, none is recognized in that part of the environment image, it then performs step S703, the full-image detection and recognition on the environment image.
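A minimal sketch of this sequential fallback, reusing the `first_sub_image` helper sketched above; the `detector` callable is an assumed name:

```python
def recognize(environment_image, shoot_box, detector):
    """Run partial-image recognition on the first sub-image first; fall back
    to full-image recognition only when the sub-image contains no target area."""
    first_result = detector(first_sub_image(environment_image, shoot_box))
    if first_result:
        # Target area found near the aim point. (In a full implementation the
        # sub-image boxes would be mapped back to environment-image coordinates.)
        return first_result
    return detector(environment_image)   # full-image detection and recognition
```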
In S704, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting device determines whether the target image area and the shooting image area satisfy the preset control relationship. The recognition result here may include the result of image area recognition on the first sub-image (the first recognition result) and/or the result of full-image detection and recognition on the environment image (the second recognition result).

In one embodiment, the shooting device both invokes the preset human-body-feature-part detection model to perform image area recognition on the first sub-image and invokes it to perform full-image detection and recognition on the environment image. In this case, if either the first or the second recognition result indicates that the environment image contains a target image area including a preset human body feature part, the step of determining whether the target image area and the shooting image area satisfy the preset control relationship may be performed.

After determining in S704 whether the target image area and the shooting image area satisfy the preset control relationship, in S705, if it is determined that they satisfy the preset control relationship, the shooting part is prohibited from shooting. For the specific implementation of S705, refer to the description of S504 in the above embodiment; details are not repeated here.
In the embodiments of the present invention, the shooting device may invoke the camera device to capture an environment image of its current environment, crop an image area from the environment image to obtain a first sub-image, invoke the preset human-body-feature-part detection model to perform image area recognition on the first sub-image, and invoke the preset detection model to perform full-image detection and recognition on the environment image. If the recognition result indicates that the environment image contains a target image area including a preset human body feature part, the shooting device determines whether the target image area and the shooting image area satisfy the preset control relationship and, if so, prohibits the shooting part from shooting. With the present invention, on the one hand, alternating partial-image and full-image recognition helps improve the efficiency of detecting human body feature parts; on the other hand, the shooting function of the shooting part can be restricted, which helps prevent it from shooting human body feature parts and improves the safety of the shooting device.
Referring to Fig. 8, Fig. 8 is a schematic flowchart of yet another image-recognition-based control method provided by an embodiment of the present invention. The method may be executed by a shooting device and is applied to a shooting device that includes a shooting part, a camera device, and an infrared camera device.

In the image-recognition-based control method shown in Fig. 8, the shooting device may, in S801, invoke the camera device to capture an environment image of its current environment. For the specific implementation of S801, refer to the description of S201 in the above embodiment; details are not repeated here.
After the shooting device captures the environment image of its current environment, it may, in S802, determine a thermal image area from the infrared environment image captured by the infrared camera device and, in S803, crop an image area from the environment image according to the thermal image area to obtain a second sub-image.
In one embodiment, the infrared environment image and the environment image have the same size. After determining the thermal image area from the infrared environment image captured by the infrared camera device, the shooting device may obtain the position information of the thermal image area in the infrared environment image and then, according to that position information, crop from the environment image an image area of the same size as the thermal image area, obtaining the second sub-image.

In one embodiment, the thermal image area may be a rectangular area, and its position information in the infrared environment image may be the position coordinates of its upper-left and lower-right corners. For example, suppose the infrared environment image and the environment image are both 640*360, the upper-left endpoint a6 of the thermal image area is at (90, 300), and the lower-right endpoint b6 is at (300, 100). In this case, the shooting device may determine the second sub-image in the environment image according to the position coordinates of endpoints a6 and b6 in the infrared environment image: the upper-left endpoint a7 of the second sub-image in the environment image has the same coordinates as a6, and the lower-right endpoint b7 has the same coordinates as b6.
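Because the infrared environment image and the environment image share a size, the mapping is a direct coordinate copy. A minimal sketch, with the heat threshold and helper names illustrative rather than from the patent:

```python
import numpy as np

def thermal_box(ir_image, heat_thresh=200):
    """Locate the thermal image area as the bounding box of hot pixels
    in a single-channel infrared environment image (threshold assumed)."""
    ys, xs = np.where(ir_image >= heat_thresh)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def second_sub_image(environment_image, ir_image):
    """Crop the environment image at the same coordinates as the thermal
    image area, valid because both images have the same size."""
    box = thermal_box(ir_image)
    if box is None:
        return None
    x1, y1, x2, y2 = box
    return environment_image[y1:y2, x1:x2]
```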
After obtaining the second sub-image, the shooting device may, in S804, invoke the preset human-body-feature-part detection model to perform image area recognition on the second sub-image.
In one embodiment, after cropping the second sub-image from the environment image according to the thermal image area, the shooting device may also enlarge the second sub-image to the target image size corresponding to the environment image to obtain a second enlarged sub-image, and invoke the preset human-body-feature-part detection model to perform image area recognition on the second enlarged sub-image. The target image size corresponding to the environment image is preset: in one embodiment, it may equal the size of the training images used when training the detection model, which helps improve the recognition accuracy of the model; in another embodiment, it may equal the image size of the environment image.
After the shooting device invokes the preset human-body-feature-part detection model to perform image area recognition on the second sub-image, in S805, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, shooting-prohibition processing is performed on the shooting part.
In one embodiment, before performing shooting-prohibition processing on the shooting part, the shooting device may also determine the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image; when that distance is less than or equal to the preset maximum range of the shooting device, the shooting part is prohibited from shooting.
In one embodiment, the shooting device may calculate the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image according to the proportion of the target image area in the environment image. In a specific implementation, during shooting, the shooting device can obtain the focal length and current focus distance of the camera device; when these are fixed, the larger the proportion of the frame occupied by the human body corresponding to the preset human body feature part, the shorter the distance between that human body and the camera device. Once the proportion of the target image area in the environment image is determined, the distance between the human body corresponding to the preset human body feature part and the shooting device can be determined from that proportion.
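A hedged sketch of this proportion-based estimate under a simple pinhole camera model; the assumed real-world head-shoulder width and the focal length value are illustrative, not from the patent:

```python
HEAD_SHOULDER_WIDTH_M = 0.45   # assumed real-world width of a head-shoulder part

def estimate_distance(target_box, focal_length_px=800.0):
    """Pinhole-model estimate: the larger the target image area's share of
    the frame, the closer the person. focal_length_px is the camera focal
    length expressed in pixels (assumed known from the camera device)."""
    box_width_px = target_box[2] - target_box[0]
    if box_width_px <= 0:
        return float("inf")
    # d = f * real_width / pixel_width
    return focal_length_px * HEAD_SHOULDER_WIDTH_M / box_width_px

# e.g. a 150 px wide head-shoulder box at f = 800 px gives d = 2.4 m,
# which would then be compared against the preset maximum range distance.
```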
In another embodiment, the shooting device may also be preconfigured with a distance sensor. In this case, the shooting device can obtain the sensing data of the distance sensor and, from that data, derive the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image (hereinafter, the target human body). The distance sensor may be an infrared ranging sensor capable of sensing the distance between obstacles in the current environment and the shooting device. Since step S805 has already determined that the environment image contains a target image area including a preset human body feature part, the human body corresponding to that feature part can in this case be regarded as one of those obstacles.
The basic ranging principle of an infrared ranging sensor is that a light-emitting tube emits infrared light and a photosensitive receiving tube receives the light reflected by objects ahead, from which the presence of an obstacle ahead is judged. The object's distance can be judged from the strength of the reflected light: the light intensity received by the receiving tube varies with the distance of the reflecting object, being strong at close range and weak at long range. In one embodiment, the infrared ranging sensor and the camera device are both installed in the shooting device; the sensor can sense the distances from the shooting device to multiple obstacles in the current environment, i.e., its sensing data includes distance data corresponding to each of multiple obstacles. In this case, the shooting device can extract the distance data of the target human body from the sensor's multi-obstacle sensing data according to the calibration relationship between the infrared ranging sensor and the camera. Further, the shooting device can derive the distance between the target human body and the shooting device from the distance data of the target human body.
In one embodiment, the sensing data of the infrared ranging sensor may be a kind of distance image of the current environment, i.e., a depth map. The depth map includes multiple pixels, and the value of each pixel can represent the distance between the obstacle corresponding to that point and the infrared ranging sensor; that is, each pixel corresponds to one piece of distance data. In this case, the shooting device can obtain the image position information of the target image area in the environment image and, from that position information, determine the depth region in the depth map corresponding to the target image area, then adjust that depth region according to the calibration relationship between the infrared ranging sensor and the camera to obtain the target depth region. Further, the shooting device can take, from all pixels of the depth map, the pixels lying within the target depth region as target pixels, obtain the distance data of the target pixels, and from it derive the distance between the target human body and the shooting device. The distance data of the target pixels is the distance data of the target human body; alternatively, when there are multiple target pixels, the distance data of all target pixels may be averaged, and the averaged distance data is then the distance data of the target human body.
The calibration relationship can represent the installation positions of the infrared ranging sensor and the camera as well as their installation distance l1. In one embodiment, if the infrared ranging sensor is located directly above the camera at a distance l1 (l1 greater than 0), the shooting device may shift the depth region upward in the depth map by k1*l1; the shifted region is the target depth region, where k1 is greater than 0 and its specific value can be preset. Alternatively, if the infrared ranging sensor is located directly below the camera at a distance l1, the shooting device may shift the depth region downward in the depth map by k1*l1, and the shifted region is the target depth region.
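A minimal numpy sketch of this shift-and-average step; the row-downward sign convention and the k1 value are assumptions:

```python
import numpy as np

K1 = 1.0   # preset scale between installation distance and pixel shift (assumed)

def target_distance(depth_map, target_box, l1_px, sensor_above_camera=True):
    """Shift the depth region by the calibrated offset k1 * l1, then average
    the distance data of the target pixels inside the shifted region."""
    x1, y1, x2, y2 = target_box
    dy = int(K1 * l1_px)
    # Rows grow downward, so shifting the region up subtracts from row indices.
    if sensor_above_camera:
        y1, y2 = y1 - dy, y2 - dy
    else:
        y1, y2 = y1 + dy, y2 + dy
    h, w = depth_map.shape
    y1, y2 = max(0, y1), min(h, y2)
    x1, x2 = max(0, x1), min(w, x2)
    target_pixels = depth_map[y1:y2, x1:x2]
    return float(np.mean(target_pixels)) if target_pixels.size else float("inf")
```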
In the embodiments of the present invention, the shooting device may invoke the camera device to capture an environment image of its current environment, determine a thermal image area from the infrared environment image captured by the infrared camera device, crop an image area from the environment image according to the thermal image area to obtain a second sub-image, and invoke the preset human-body-feature-part detection model to perform image area recognition on the second sub-image; if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, shooting-prohibition processing is performed on the shooting part. With the present invention, on the one hand, the efficiency of detecting human body feature parts is improved; on the other hand, the shooting function of the shooting part can be restricted, which helps prevent it from shooting human body feature parts and improves the safety of the shooting device.
Based on the description of the above method embodiments, in one embodiment, an embodiment of the present invention further provides an image-recognition-based control apparatus as shown in Fig. 9. The control apparatus is configured in a shooting device that includes a shooting part and a camera device, and the control apparatus includes:

an acquisition module 90, configured to invoke the camera device to capture an environment image of the environment in which the shooting device is currently located;

a processing module 91, configured to invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image captured by the acquisition module 90;

the processing module 91 being further configured to perform shooting-prohibition processing on the shooting part if the recognition result indicates that the environment image contains a target image area including a preset human body feature part.
In one embodiment, the processing module 91 is further configured to train an initial detection model according to the training images in a sample image set and the annotation information of each training image to obtain the human-body-feature-part detection model, where the sample image set includes multiple training image groups collected in different shooting scenes, the training images in each group include image areas of head-shoulder parts, and the annotation information includes the image position information of the head-shoulder part in the corresponding training image.

In one embodiment, the processing module 91 is further configured to determine whether the target image area and the shooting image area satisfy a preset control relationship, the shooting image area being the image area corresponding, on the environment image, to the estimated shooting area of the shooting part; and, if so, to prohibit the shooting part from shooting.

In one embodiment, the processing module 91 is further configured to determine that the target image area and the shooting image area satisfy the preset control relationship if the overlap degree between them is greater than or equal to a preset overlap threshold; or to determine that they satisfy the preset control relationship if the distance between them is less than or equal to a preset distance threshold; or to determine that they satisfy the preset control relationship if the target image area lies within the shooting image area or the shooting image area lies within the target image area.

In one embodiment, the processing module 91 is further configured to determine the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image, and to trigger execution of the step of performing shooting-prohibition processing on the shooting part when the distance is less than or equal to the preset maximum range of the shooting device.

In one embodiment, the processing module 91 is further configured to calculate, according to the proportion of the target image area in the environment image, the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image; or to obtain the sensing data of a distance sensor and derive that distance from the sensing data.

In one embodiment, the processing module 91 is further configured to crop an image area from the environment image to obtain a first sub-image and invoke the preset human-body-feature-part detection model to perform image area recognition on the first sub-image; and/or to invoke the preset human-body-feature-part detection model to perform full-image detection and recognition on the environment image.

In one embodiment, the processing module 91 is further configured to enlarge the first sub-image to the target image size corresponding to the environment image to obtain a first enlarged sub-image, and to invoke the preset human-body-feature-part detection model to perform image area recognition on the first enlarged sub-image.

In one embodiment, the first sub-image is obtained according to the shooting image area.

In one embodiment, the shooting device further includes an infrared camera device, and the processing module 91 is further configured to determine a thermal image area from the infrared environment image captured by the infrared camera device, crop an image area from the environment image according to the thermal image area to obtain a second sub-image, and invoke the preset human-body-feature-part detection model to perform image area recognition on the second sub-image.

In one embodiment, the processing module 91 is further configured to enlarge the second sub-image to the target image size corresponding to the environment image to obtain a second enlarged sub-image, and to invoke the preset human-body-feature-part detection model to perform image area recognition on the second enlarged sub-image.

In one embodiment, the preset human body feature part includes a head-shoulder feature part.

In the embodiments of the present invention, for the specific implementation of the above modules, refer to the description of the relevant content in the embodiments corresponding to Fig. 2, Fig. 5, Fig. 7, or Fig. 8.
Referring to Fig. 10, which is a schematic structural block diagram of a control device provided by an embodiment of the present invention. The control device is configured in a shooting device that includes a shooting part and a camera device. The control device may include a controller 10, a communication interface 11, and a memory 12, connected through a bus, where the memory 12 is used to store program instructions and image data (such as environment images).

The memory 12 may include volatile memory, such as random-access memory (RAM); the memory 12 may also include non-volatile memory, such as flash memory or a solid-state drive (SSD); the memory 12 may also be double data rate synchronous dynamic random-access memory (DDR SDRAM); the memory 12 may also include a combination of the above types of memory.
In the embodiments of the present invention, the memory 12 is used to store a computer program comprising program instructions, and the controller 10 is configured, when invoking the program instructions, to: invoke the camera device to capture an environment image of the environment in which the shooting device is currently located; invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image; and, if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, perform shooting-prohibition processing on the shooting part.
In one embodiment, the controller 10 is further configured to determine whether the target image area and the shooting image area satisfy a preset control relationship, the shooting image area being the image area corresponding, on the environment image, to the estimated shooting area of the shooting part; and, if so, to prohibit the shooting part from shooting.

In one embodiment, the controller 10 is further configured to determine that the target image area and the shooting image area satisfy the preset control relationship if the overlap degree between them is greater than or equal to a preset overlap threshold; or to determine that they satisfy the preset control relationship if the distance between them is less than or equal to a preset distance threshold; or to determine that they satisfy the preset control relationship if the target image area lies within the shooting image area or the shooting image area lies within the target image area.

In one embodiment, the controller 10 is further configured to determine the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image, and to trigger execution of the step of performing shooting-prohibition processing on the shooting part when the distance is less than or equal to the preset maximum range of the shooting device.

In one embodiment, the controller 10 is further configured to calculate, according to the proportion of the target image area in the environment image, the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image; or to obtain the sensing data of a distance sensor and derive that distance from the sensing data.

In one embodiment, the controller 10 is further configured to crop an image area from the environment image to obtain a first sub-image and invoke the preset human-body-feature-part detection model to perform image area recognition on the first sub-image; and/or to invoke the preset human-body-feature-part detection model to perform full-image detection and recognition on the environment image.

In one embodiment, the controller 10 is further configured to enlarge the first sub-image to the target image size corresponding to the environment image to obtain a first enlarged sub-image, and to invoke the preset human-body-feature-part detection model to perform image area recognition on the first enlarged sub-image.

In one embodiment, the first sub-image is obtained according to the shooting image area.

In one embodiment, the shooting device further includes an infrared camera device, and the controller 10 is further configured to determine a thermal image area from the infrared environment image captured by the infrared camera device, crop an image area from the environment image according to the thermal image area to obtain a second sub-image, and invoke the preset human-body-feature-part detection model to perform image area recognition on the second sub-image.

In one embodiment, the controller 10 is further configured to enlarge the second sub-image to the target image size corresponding to the environment image to obtain a second enlarged sub-image, and to invoke the preset human-body-feature-part detection model to perform image area recognition on the second enlarged sub-image.

In one embodiment, the preset human body feature part includes a head-shoulder feature part.

In the embodiments of the present invention, for the specific implementation of the controller 10, refer to the description of the relevant content in the embodiments corresponding to Fig. 2, Fig. 5, Fig. 7, or Fig. 8.
A person of ordinary skill in the art will understand that all or part of the procedures of the methods of the above embodiments can be completed by a computer program instructing the related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random-access memory (RAM), or the like.

What is disclosed above is merely some embodiments of the present invention and certainly cannot be used to limit the scope of rights of the present invention. A person of ordinary skill in the art will understand that all or part of the procedures implementing the above embodiments, together with equivalent changes made according to the claims of the present invention, still fall within the scope covered by the invention.

Claims (25)

  1. An image-recognition-based control method, wherein the method is applied to a shooting device comprising a shooting part and a camera device, and the method comprises:
    invoking the camera device to capture an environment image of the environment in which the shooting device is currently located;
    invoking a preset human-body-feature-part detection model to perform image area recognition on the environment image;
    if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, performing shooting-prohibition processing on the shooting part.
  2. The method according to claim 1, wherein, before invoking the preset human-body-feature-part detection model to perform image area recognition on the environment image, the method further comprises:
    training an initial detection model according to the training images in a sample image set and the annotation information of each training image to obtain the human-body-feature-part detection model;
    wherein the sample image set includes multiple training image groups collected in different shooting scenes, the training images in each group include image areas of head-shoulder parts, and the annotation information includes the image position information of the head-shoulder part in the corresponding training image.
  3. The method according to claim 1 or 2, wherein performing shooting-prohibition processing on the shooting part comprises:
    determining whether the target image area and a shooting image area satisfy a preset control relationship, the shooting image area being the image area corresponding, on the environment image, to the estimated shooting area of the shooting part;
    if so, prohibiting the shooting part from shooting.
  4. The method according to claim 3, wherein determining whether the target image area and the shooting image area satisfy the preset control relationship comprises:
    determining that the target image area and the shooting image area satisfy the preset control relationship if the overlap degree between them is greater than or equal to a preset overlap threshold; or
    determining that the target image area and the shooting image area satisfy the preset control relationship if the distance between them is less than or equal to a preset distance threshold; or determining that the target image area and the shooting image area satisfy the preset control relationship if the target image area lies within the shooting image area or the shooting image area lies within the target image area.
  5. The method according to claim 1, wherein, before performing shooting-prohibition processing on the shooting part, the method further comprises:
    determining the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image;
    when the distance is less than or equal to the preset maximum range of the shooting device, triggering execution of the step of performing shooting-prohibition processing on the shooting part.
  6. The method according to claim 5, wherein determining the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image comprises:
    calculating, according to the proportion of the target image area in the environment image, the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image; or
    obtaining the sensing data of a distance sensor and deriving, from the sensing data, the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image.
  7. The method according to claim 3, wherein invoking the preset human-body-feature-part detection model to perform image area recognition on the environment image comprises:
    cropping an image area from the environment image to obtain a first sub-image, and invoking the preset human-body-feature-part detection model to perform image area recognition on the first sub-image; and/or
    invoking the preset human-body-feature-part detection model to perform full-image detection and recognition on the environment image.
  8. The method according to claim 7, wherein invoking the preset human-body-feature-part detection model to perform image area recognition on the first sub-image comprises:
    enlarging the first sub-image to the target image size corresponding to the environment image to obtain a first enlarged sub-image;
    invoking the preset human-body-feature-part detection model to perform image area recognition on the first enlarged sub-image.
  9. The method according to claim 7 or 8, wherein the first sub-image is obtained according to the shooting image area.
  10. The method according to claim 1, wherein the shooting device further comprises an infrared camera device, and invoking the preset human-body-feature-part detection model to perform image area recognition on the environment image comprises:
    determining a thermal image area from an infrared environment image captured by the infrared camera device;
    cropping an image area from the environment image according to the thermal image area to obtain a second sub-image;
    invoking the preset human-body-feature-part detection model to perform image area recognition on the second sub-image.
  11. The method according to claim 10, wherein invoking the preset human-body-feature-part detection model to perform image area recognition on the second sub-image comprises:
    enlarging the second sub-image to the target image size corresponding to the environment image to obtain a second enlarged sub-image;
    invoking the preset human-body-feature-part detection model to perform image area recognition on the second enlarged sub-image.
  12. The method according to any one of claims 1-11, wherein the preset human body feature part includes a head-shoulder feature part.
  13. An image-recognition-based control apparatus, wherein the control apparatus is configured in a shooting device comprising a shooting part and a camera device, and the control apparatus comprises:
    an acquisition module, configured to invoke the camera device to capture an environment image of the environment in which the shooting device is currently located;
    a processing module, configured to invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image captured by the acquisition module;
    the processing module being further configured to perform shooting-prohibition processing on the shooting part if the recognition result indicates that the environment image contains a target image area including a preset human body feature part.
  14. A control device, wherein the control device is configured in a shooting device comprising a shooting part and a camera device, the control device comprises a controller and a communication interface, and the controller is configured to:
    invoke the camera device to capture an environment image of the environment in which the shooting device is currently located;
    invoke a preset human-body-feature-part detection model to perform image area recognition on the environment image;
    if the recognition result indicates that the environment image contains a target image area including a preset human body feature part, perform shooting-prohibition processing on the shooting part.
  15. The control device according to claim 14, wherein the controller is specifically configured to determine whether the target image area and a shooting image area satisfy a preset control relationship, the shooting image area being the image area corresponding, on the environment image, to the estimated shooting area of the shooting part; and, if so, to prohibit the shooting part from shooting.
  16. The control device according to claim 15, wherein the controller is specifically configured to determine that the target image area and the shooting image area satisfy the preset control relationship if the overlap degree between them is greater than or equal to a preset overlap threshold; or to determine that they satisfy the preset control relationship if the distance between them is less than or equal to a preset distance threshold; or to determine that they satisfy the preset control relationship if the target image area lies within the shooting image area or the shooting image area lies within the target image area.
  17. The control device according to claim 14, wherein the controller is further configured to determine the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image, and to trigger execution of the shooting-prohibition processing on the shooting part when the distance is less than or equal to the preset maximum range of the shooting device.
  18. The control device according to claim 17, wherein the controller is specifically configured to calculate, according to the proportion of the target image area in the environment image, the distance between the shooting device and the human body corresponding to the preset human body feature part in the environment image; or to obtain the sensing data of a distance sensor and derive that distance from the sensing data.
  19. The control device according to claim 15, wherein the controller is specifically configured to crop an image area from the environment image to obtain a first sub-image and invoke the preset human-body-feature-part detection model to perform image area recognition on the first sub-image; and/or to invoke the preset human-body-feature-part detection model to perform full-image detection and recognition on the environment image.
  20. The control device according to claim 19, wherein the controller is specifically configured to enlarge the first sub-image to the target image size corresponding to the environment image to obtain a first enlarged sub-image, and to invoke the preset human-body-feature-part detection model to perform image area recognition on the first enlarged sub-image.
  21. The control device according to claim 19 or 20, wherein the first sub-image is obtained according to the shooting image area.
  22. The control device according to claim 14, wherein the shooting device further comprises an infrared camera device, and the controller is specifically configured to determine a thermal image area from an infrared environment image captured by the infrared camera device, crop an image area from the environment image according to the thermal image area to obtain a second sub-image, and invoke the preset human-body-feature-part detection model to perform image area recognition on the second sub-image.
  23. The control device according to claim 22, wherein the controller is specifically configured to enlarge the second sub-image to the target image size corresponding to the environment image to obtain a second enlarged sub-image, and to invoke the preset human-body-feature-part detection model to perform image area recognition on the second enlarged sub-image.
  24. The control device according to any one of claims 14-23, wherein the preset human body feature part includes a head-shoulder feature part.
  25. A computer storage medium, wherein the computer storage medium stores program instructions which, when executed, are used to implement the method according to any one of claims 1-12.
PCT/CN2018/113160 2018-10-31 2018-10-31 一种基于图像识别的控制方法、装置及控制设备 WO2020087383A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2018/113160 WO2020087383A1 (zh) 2018-10-31 2018-10-31 一种基于图像识别的控制方法、装置及控制设备
CN201880037903.3A CN110770739A (zh) 2018-10-31 2018-10-31 一种基于图像识别的控制方法、装置及控制设备
US17/242,277 US20210248362A1 (en) 2018-10-31 2021-04-27 Image-recognition-based control method and apparatus, and control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/113160 WO2020087383A1 (zh) 2018-10-31 2018-10-31 一种基于图像识别的控制方法、装置及控制设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/242,277 Continuation US20210248362A1 (en) 2018-10-31 2021-04-27 Image-recognition-based control method and apparatus, and control device

Publications (1)

Publication Number Publication Date
WO2020087383A1 true WO2020087383A1 (zh) 2020-05-07

Family

ID=69328777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113160 WO2020087383A1 (zh) 2018-10-31 2018-10-31 一种基于图像识别的控制方法、装置及控制设备

Country Status (3)

Country Link
US (1) US20210248362A1 (zh)
CN (1) CN110770739A (zh)
WO (1) WO2020087383A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111916219A (zh) * 2020-07-17 2020-11-10 深圳中集智能科技有限公司 检验检疫智能安全预警方法、装置及电子系统
CN116403284A (zh) * 2023-04-07 2023-07-07 北京奥康达体育产业股份有限公司 一种基于蓝牙传输技术的智慧跑步考核训练系统

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12066263B2 (en) * 2020-06-10 2024-08-20 Brett C. Bilbrey Human transported automatic weapon subsystem with human-non-human target recognition

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050117779A1 (en) * 2003-11-27 2005-06-02 Konica Minolta Holdings, Inc. Object detection apparatus, object detection method and computer program product
CN101406390A (zh) * 2007-10-10 2009-04-15 三星电子株式会社 检测人体部位和人的方法和设备以及对象检测方法和设备
WO2012172182A1 (en) * 2011-06-16 2012-12-20 Sako Ltd Safety device of a gun and method for using safety device
CN103455799A (zh) * 2013-09-06 2013-12-18 西安邮电大学 一种人体头肩检测方法及其装置
US20160169605A1 (en) * 2015-10-14 2016-06-16 Timothy M Courtot Fire restrainig device for selective intelligent firing
CN106123675A (zh) * 2016-06-27 2016-11-16 何镜连 不杀生的枪
CN108038469A (zh) * 2017-12-27 2018-05-15 百度在线网络技术(北京)有限公司 用于检测人体的方法和装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108066980A (zh) * 2016-11-12 2018-05-25 金德奎 一种基于位置和图像识别的ar或mr游戏方法
CN106686308B (zh) * 2016-12-28 2018-02-16 平安科技(深圳)有限公司 图像焦距检测方法和装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050117779A1 (en) * 2003-11-27 2005-06-02 Konica Minolta Holdings, Inc. Object detection apparatus, object detection method and computer program product
CN101406390A (zh) * 2007-10-10 2009-04-15 三星电子株式会社 检测人体部位和人的方法和设备以及对象检测方法和设备
WO2012172182A1 (en) * 2011-06-16 2012-12-20 Sako Ltd Safety device of a gun and method for using safety device
CN103455799A (zh) * 2013-09-06 2013-12-18 西安邮电大学 一种人体头肩检测方法及其装置
US20160169605A1 (en) * 2015-10-14 2016-06-16 Timothy M Courtot Fire restrainig device for selective intelligent firing
CN106123675A (zh) * 2016-06-27 2016-11-16 何镜连 不杀生的枪
CN108038469A (zh) * 2017-12-27 2018-05-15 百度在线网络技术(北京)有限公司 用于检测人体的方法和装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111916219A (zh) * 2020-07-17 2020-11-10 深圳中集智能科技有限公司 检验检疫智能安全预警方法、装置及电子系统
CN116403284A (zh) * 2023-04-07 2023-07-07 北京奥康达体育产业股份有限公司 一种基于蓝牙传输技术的智慧跑步考核训练系统
CN116403284B (zh) * 2023-04-07 2023-09-12 北京奥康达体育产业股份有限公司 一种基于蓝牙传输技术的智慧跑步考核训练系统

Also Published As

Publication number Publication date
CN110770739A (zh) 2020-02-07
US20210248362A1 (en) 2021-08-12

Similar Documents

Publication Publication Date Title
US20210248362A1 (en) Image-recognition-based control method and apparatus, and control device
WO2019037088A1 (zh) 一种曝光的控制方法、装置以及无人机
US11537696B2 (en) Method and apparatus for turning on screen, mobile terminal and storage medium
US9986155B2 (en) Image capturing method, panorama image generating method and electronic apparatus
WO2020237565A1 (zh) 一种目标追踪方法、装置、可移动平台及存储介质
US9001219B2 (en) Image processing apparatus configured to detect object included in image and method therefor
US20190265029A1 (en) Depth measuring method and system
CN108965721A (zh) 摄像头模组的控制方法和装置、电子设备
TWI709110B (zh) 攝像頭校準方法和裝置、電子設備
US20190149787A1 (en) Projection system and image projection method
WO2019179364A1 (zh) 拍摄方法、装置和智能设备
US20130235227A1 (en) Image capturing device and method thereof and human recognition photograph system
CN105915803B (zh) 一种基于传感器的拍照方法及系统
CN111598065B (zh) 深度图像获取方法及活体识别方法、设备、电路和介质
CN110351543A (zh) 适应性红外线投影控制的方法以及装置
WO2023072030A1 (zh) 镜头自动对焦方法及装置、电子设备和计算机可读存储介质
WO2020010620A1 (zh) 波浪识别方法、装置、计算机可读存储介质和无人飞行器
TWI749370B (zh) 臉部辨識方法及其相關電腦系統
CN104065949B (zh) 一种电视虚拟触控方法及系统
CN108227923A (zh) 一种基于体感技术的虚拟触控系统和方法
WO2019196793A1 (zh) 图像处理方法及装置、电子设备和计算机可读存储介质
EP4297395A1 (en) Photographing exposure method and apparatus for self-walking device
CN113711229A (zh) 电子设备的控制方法、电子设备和计算机可读存储介质
WO2022021093A1 (zh) 拍摄方法、拍摄装置及存储介质
CN113286084A (zh) 终端的图像采集方法及装置、存储介质、终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18939124

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18939124

Country of ref document: EP

Kind code of ref document: A1