CN115810203B - Obstacle avoidance recognition method, system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115810203B
CN115810203B (application number CN202211634663.XA)
Authority
CN
China
Prior art keywords
image
image set
identified
obstacle
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211634663.XA
Other languages
Chinese (zh)
Other versions
CN115810203A (en)
Inventor
陆赞信
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iMusic Culture and Technology Co Ltd
Original Assignee
iMusic Culture and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iMusic Culture and Technology Co Ltd filed Critical iMusic Culture and Technology Co Ltd
Priority to CN202211634663.XA priority Critical patent/CN115810203B/en
Publication of CN115810203A publication Critical patent/CN115810203A/en
Application granted granted Critical
Publication of CN115810203B publication Critical patent/CN115810203B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an obstacle avoidance recognition method, system, electronic device and storage medium, wherein the method comprises the following steps: performing framing processing on an input video to be identified to obtain an image set to be identified; preprocessing the image set to be identified through a pre-trained human body region marking model to obtain a preprocessed image set; inputting the preprocessed image set into a pre-trained key point recognition model for key point labeling processing to obtain a labeled image set, wherein the key points comprise the nose tip, shoulder joint points and hip joint points of the human body; acquiring the appearance position and appearance time of an obstacle; and performing avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result. The embodiment of the invention can reduce the number of key points to be identified, thereby improving the processing efficiency of avoidance recognition, and can be widely applied to the technical field of artificial intelligence.

Description

Obstacle avoidance recognition method, system, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an obstacle avoidance recognition method, system, electronic device and storage medium.
Background
With the rapid development of artificial intelligence technology, the interface of human-computer interaction games has gradually evolved from controlling a game character's movement with a keyboard and controlling aiming and shooting with a mouse, to controlling a character's movement or triggering skills with a microphone, or capturing the user's actions with a camera or sensor to control a character's movement. In a camera-based human-computer interaction obstacle avoidance game, it is necessary to determine whether the user has successfully avoided an obstacle. However, avoidance recognition methods in the related art must identify many human skeleton key points, and therefore suffer from a large amount of computation, complex calculation, many steps and low computational efficiency. In view of the foregoing, the technical problems in the related art need to be solved.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a method, a system, an electronic device, and a storage medium for identifying obstacle avoidance, so as to improve the processing efficiency of avoidance identification.
In one aspect, the present invention provides an obstacle avoidance recognition method, the method comprising:
performing framing processing on an input video to be identified to obtain an image set to be identified;
preprocessing the image set to be identified through a pre-trained human body region marking model to obtain a preprocessed image set;
inputting the preprocessed image set into a pre-trained key point recognition model for key point labeling processing to obtain a labeled image set, wherein the key points comprise the nose tip, shoulder joint points and hip joint points of the human body;
acquiring the appearance position and appearance time of an obstacle;
and performing avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result.
Optionally, the framing processing is performed on the input video to be identified to obtain an image set to be identified, including:
performing video division processing on the input video to be identified to obtain a plurality of video segments;
and extracting frames from each video segment at a preset frame rate to obtain the image set to be identified.
Optionally, before the image set to be identified is preprocessed by the pre-trained human body region marking model to obtain the preprocessed image set, the method includes training the human body region marking model, the steps comprising:
acquiring a human body image training set;
performing marking frame selection processing on each image in the human body image training set to obtain a marked image set;
inputting the marked image set into the human body region marking model to obtain a marking result;
determining a training loss value according to the marking result and the labels of the marked image set;
and updating the parameters of the human body region marking model according to the loss value.
Optionally, the preprocessing of the image set to be identified through the pre-trained human body region marking model to obtain the preprocessed image set includes:
inputting each frame of image in the image set to be identified into the human body region marking model to obtain an image marking frame set;
cutting the image set to be identified according to the image marking frame set to obtain a cut image set;
performing thermodynamic diagram generation processing on the cut image set to obtain a thermal image set;
and determining the cut image set and the thermal image set as the preprocessed image set.
Optionally, before the preprocessed image set is input into the pre-trained key point recognition model for key point labeling processing to obtain the labeled image set, the method includes training the key point recognition model, the steps comprising:
acquiring a recognition image training set, wherein the recognition image training set comprises cut training images and thermal training images;
inputting the recognition image training set into the key point recognition model, and recognizing the nose tip, shoulder joint points and hip joint points of the human body in the recognition image training set to obtain a key point recognition result;
determining a training loss value according to the key point recognition result and the labels of the recognition image training set;
and updating the parameters of the key point recognition model according to the loss value.
Optionally, the performing of avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result includes:
selecting an image of the human body in the neutral position from the labeled image set as a reference image;
acquiring marked key points according to the recognition result of the reference image, and determining a reference vertical line according to the marked key points;
obtaining an image to be judged from the labeled image set according to the appearance time of the obstacle;
and performing avoidance judgment according to the appearance position of the obstacle, the reference vertical line and the key point recognition result of the image to be judged to obtain the obstacle avoidance recognition result.
In another aspect, an embodiment of the present invention provides an obstacle avoidance recognition system, comprising: a first module for performing framing processing on an input video to be identified to obtain an image set to be identified;
a second module for preprocessing the image set to be identified through a pre-trained human body region marking model to obtain a preprocessed image set;
a third module for inputting the preprocessed image set into a pre-trained key point recognition model for key point labeling processing to obtain a labeled image set;
a fourth module for acquiring the appearance position and appearance time of an obstacle;
and a fifth module for performing avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result.
Optionally, the first module, configured to perform framing processing on the input video to be identified to obtain the image set to be identified, includes:
a first unit for performing video division processing on the input video to be identified to obtain a plurality of video segments;
and a second unit for extracting frames from each video segment at a preset frame rate to obtain the image set to be identified.
In another aspect, an embodiment of the present invention also discloses an electronic device, comprising a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
In another aspect, embodiments of the present invention also disclose a computer readable storage medium storing a program for execution by a processor to implement a method as described above.
In another aspect, embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
Compared with the prior art, the technical scheme provided by the invention has the following technical effects: framing processing is performed on the input video to be identified to obtain an image set to be identified; the image set to be identified is preprocessed through a pre-trained human body region marking model to obtain a preprocessed image set; the preprocessed image set is input into a pre-trained key point recognition model for key point labeling processing to obtain a labeled image set, wherein the key points comprise the nose tip, shoulder joint points and hip joint points of the human body; the appearance position and appearance time of the obstacle are acquired; and avoidance recognition processing is performed on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result. According to the embodiment of the invention, avoidance recognition processing is carried out through the human body region marking model and the key point recognition model, so that the number of key points to be recognized is reduced and the processing efficiency of avoidance recognition is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an obstacle avoidance recognition method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a neutral position key point identification according to an embodiment of the present application;
fig. 3 is a schematic diagram of key point identification when dodging toward the right according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the related art, a camera-based human-computer interaction obstacle avoidance game needs to identify multiple skeletal joints of the human body, for example the coordinates of different joints including the head, neck, tailbone and hips, and a standard human skeleton key point model is preset for recognition in order to judge whether the user has successfully avoided an obstacle. However, this approach needs to identify many human skeleton key points, and involves a large amount of computation, complex calculation, many steps and low computational efficiency.
In view of the above, referring to fig. 1, an embodiment of the present invention provides an obstacle avoidance recognition method, including:
S101, performing framing processing on an input video to be identified to obtain an image set to be identified;
S102, preprocessing the image set to be identified through a pre-trained human body region marking model to obtain a preprocessed image set;
S103, inputting the preprocessed image set into a pre-trained key point recognition model for key point labeling processing to obtain a labeled image set, wherein the key points comprise the nose tip, shoulder joint points and hip joint points of the human body;
S104, acquiring the appearance position and appearance time of an obstacle;
S105, performing avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result.
In the embodiment of the invention, the input video to be identified is subjected to framing processing to obtain the image set to be identified, and the image set to be identified is then analyzed frame by frame, so that a real-time obstacle avoidance recognition result can be obtained. The embodiment of the invention preprocesses the image set to be identified through a pre-trained human body region marking model to obtain a preprocessed image set; the preprocessing mainly uses the human body region marking model to filter the background, marking and cutting out the human body region so as to reduce the influence of background noise. A color thermodynamic diagram is then generated from the cut human body region image, and the human body region image and the color thermodynamic diagram are input as the preprocessed image set into the key point recognition model for key point recognition, wherein the key points comprise the nose tip, shoulder joint points and hip joint points of the human body; the labeled image set is obtained after recognition and labeling. Then, the appearance position and appearance time of the obstacle are obtained, and avoidance recognition processing is performed on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result. According to the embodiment of the invention, only a small number of key points are identified through the human body region marking model and the key point recognition model, so that the recognition processing time is shortened and the avoidance recognition efficiency is improved.
Further as a preferred embodiment, in step S101, the framing processing is performed on the input video to be identified to obtain the image set to be identified, including:
performing video division processing on the input video to be identified to obtain a plurality of video segments;
and extracting frames from each video segment at a preset frame rate to obtain the image set to be identified.
In the embodiment of the invention, in the camera-based human-computer interaction obstacle avoidance game, a camera for acquiring the user's limb actions is preset on an intelligent terminal device. The intelligent terminal device may include intelligent devices capable of human-computer interaction such as a portable computer, a smart television and a tablet computer; the camera arranged on the intelligent terminal device acquires the user's limb actions, and video capture starts from the starting time of the human-computer interaction obstacle avoidance game to obtain the video to be identified. The input video to be identified is then divided into video segments by time period or video length. In the embodiment of the invention, the video is divided once per minute to obtain a plurality of video segments, and frames are then extracted from each video segment at a preset frame rate, which can be designed autonomously according to the actual situation; the extracted frame images form the image set to be identified.
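As an illustrative sketch (not part of the patent text), the one-minute segmentation and preset-rate frame extraction of step S101 can be expressed in index form as follows; the helper names are assumptions, and actual frame decoding would use a video library such as OpenCV.

```python
# Sketch of step S101: split a video into fixed-length segments, then sample
# frame indices from each segment at a preset rate. All names are illustrative.

def split_into_segments(total_frames: int, fps: float, segment_seconds: int = 60):
    """Return (start, end) frame-index ranges, one per segment (last may be short)."""
    frames_per_segment = int(fps * segment_seconds)
    return [(s, min(s + frames_per_segment, total_frames))
            for s in range(0, total_frames, frames_per_segment)]

def sample_frame_indices(segment, preset_rate: float, fps: float):
    """Indices of the frames to extract from one segment at preset_rate frames/sec."""
    start, end = segment
    step = max(1, round(fps / preset_rate))
    return list(range(start, end, step))
```

For a two-minute 30 fps clip this yields two segments of 1800 frames each; sampling at 10 frames per second keeps every third frame.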
Further as a preferred embodiment, before the image set to be identified is preprocessed by the pre-trained human body region marking model to obtain the preprocessed image set, the method includes training the human body region marking model, the steps comprising:
acquiring a human body image training set;
performing marking frame selection processing on each image in the human body image training set to obtain a marked image set;
inputting the marked image set into the human body region marking model to obtain a marking result;
determining a training loss value according to the marking result and the labels of the marked image set;
and updating the parameters of the human body region marking model according to the loss value.
In the embodiment of the invention, the image set to be identified is preprocessed through the pre-trained human body region marking model; before that, the human body region marking model needs to be trained. The embodiment of the invention trains the human body region marking model on a human body image training set containing a large number of human body images, and the human body region marking model can be constructed with a convolutional neural network. In the embodiment of the invention, the human body images are manually marked by frame selection with rectangular coordinate points; the marked image set is input into the human body region marking model, which outputs the four vertex coordinates of the human body marking frame as the marking result. The training loss value is determined from the marking result and the rectangular coordinate points with which the marked image set was labeled in advance, and the parameters of the human body region marking model are updated according to the loss value; when the loss value, i.e. the training error, falls below a preset value, training stops and the trained human body region marking model is obtained. According to the embodiment of the invention, the human body region marking model performs human body frame selection on the image, which reduces the background noise of the image and improves the accuracy of obstacle avoidance recognition.
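The training objective described above can be sketched as follows; mean squared error over the four marked box vertices is an assumption, since the patent does not name a specific loss function, and the function name is illustrative.

```python
import numpy as np

# Sketch of the box-vertex loss: the model outputs four vertex coordinates of the
# human bounding frame, compared against the manually marked vertices.
# MSE is an assumed choice; training stops when the loss drops below a preset value.

def box_vertex_loss(pred_vertices: np.ndarray, marked_vertices: np.ndarray) -> float:
    """MSE between predicted and hand-marked box vertices, each of shape (4, 2)."""
    assert pred_vertices.shape == marked_vertices.shape == (4, 2)
    return float(np.mean((pred_vertices - marked_vertices) ** 2))
```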
Further as a preferred embodiment, the preprocessing of the image set to be identified through the pre-trained human body region marking model to obtain the preprocessed image set includes:
inputting each frame of image in the image set to be identified into the human body region marking model to obtain an image marking frame set;
cutting the image set to be identified according to the image marking frame set to obtain a cut image set;
performing thermodynamic diagram generation processing on the cut image set to obtain a thermal image set;
and determining the cut image set and the thermal image set as the preprocessed image set.
In the embodiment of the invention, the image set to be identified is preprocessed through the pre-trained human body region marking model. Each frame of image in the image set to be identified is first input into the human body region marking model to obtain the marked image marking frame set, and the image set to be identified is then cut according to the image marking frame set, removing the redundant background, to obtain the cut image set, which contains only the human body regions. Thermodynamic diagram generation processing is then performed on each cut image in the cut image set to obtain the thermal image set. The thermodynamic diagram of a cut image may be generated using a thermodynamic generation formula, shown below:
Y_xy = exp( -((x - p_x)² + (y - p_y)²) / (2σ²) )
where Y_xy denotes the filter coefficient value at image coordinate point (x, y), p_x is the midpoint value in the horizontal direction of the image, p_y is the midpoint value in the vertical direction of the image, and σ represents the standard deviation.
The values calculated by the thermodynamic generation formula are multiplied with the original pixel values (R_xy, G_xy, B_xy) at the image coordinates (x, y) to obtain new pixel values (Y·R_xy, Y·G_xy, Y·B_xy), i.e. the thermal image. Finally, the cut image set and the thermal image set are determined as the preprocessed image set and input into the key point recognition model for training, which improves the recognition accuracy of the key point recognition model and thereby the accuracy of obstacle avoidance recognition.
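The formula and the per-channel multiplication above can be sketched directly; the value of σ and the function names are assumptions for illustration.

```python
import numpy as np

# Sketch of the thermodynamic-diagram (heat-map) step: a Gaussian filter
# coefficient Y_xy, centred on the image midpoint (p_x, p_y), is computed per the
# formula above and multiplied into each colour channel of the cut image.

def gaussian_filter(height: int, width: int, sigma: float) -> np.ndarray:
    """Y_xy = exp(-((x - p_x)^2 + (y - p_y)^2) / (2 * sigma^2)) for every pixel."""
    ys, xs = np.mgrid[0:height, 0:width]
    p_x, p_y = (width - 1) / 2.0, (height - 1) / 2.0  # image midpoints
    return np.exp(-((xs - p_x) ** 2 + (ys - p_y) ** 2) / (2.0 * sigma ** 2))

def apply_heatmap(image: np.ndarray, sigma: float) -> np.ndarray:
    """Multiply (R_xy, G_xy, B_xy) by Y_xy to get (Y*R, Y*G, Y*B); image is (H, W, 3)."""
    y = gaussian_filter(image.shape[0], image.shape[1], sigma)
    return image * y[..., None]
```

The coefficient equals 1 at the image centre and decays toward the borders, so the centred human body region is emphasized over any residual background.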
Further as a preferred embodiment, before the preprocessed image set is input into the pre-trained key point recognition model for key point labeling processing to obtain the labeled image set, the method includes training the key point recognition model, the steps comprising:
acquiring a recognition image training set, wherein the recognition image training set comprises cut training images and thermal training images;
inputting the recognition image training set into the key point recognition model, and recognizing the nose tip, shoulder joint points and hip joint points of the human body in the recognition image training set to obtain a key point recognition result;
determining a training loss value according to the key point recognition result and the labels of the recognition image training set;
and updating the parameters of the key point recognition model according to the loss value.
In the embodiment of the invention, the key point recognition model can also be built with a convolutional neural network and is trained with the recognition image training set, which comprises a large number of cut training images and thermal training images. The embodiment of the invention likewise adopts a supervised learning method to train the key point recognition model: the key points of the cut training images are labeled, the recognition image training set is input into the key point recognition model, the model recognizes the nose tip, shoulder joint points and hip joint points of the human body in the recognition image training set, and the key point recognition result is output. The training loss value is determined from the key point recognition result output by the model and the labels of the pre-labeled recognition image training set, and the parameters of the key point recognition model are updated according to the loss value. The key point recognition model in the embodiment of the invention recognizes the nose tip, shoulder joint points and hip joint points of the human body, wherein the shoulder joint points comprise a left shoulder joint point and a right shoulder joint point, and the hip joint points comprise a left hip joint point and a right hip joint point.
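The supervised objective for the five key points described above can be sketched as follows; mean Euclidean distance is an assumed loss, and the names are illustrative, since the patent does not fix a specific loss for this model either.

```python
import numpy as np

# Sketch of the key-point loss: the model predicts five points (nose tip,
# left/right shoulder joint, left/right hip joint), compared with the labels.
# Mean Euclidean distance per point is an assumed choice.

KEYPOINT_NAMES = ("nose", "l_shoulder", "r_shoulder", "l_hip", "r_hip")

def keypoint_loss(pred: np.ndarray, label: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and labeled points, shape (5, 2)."""
    assert pred.shape == label.shape == (5, 2)
    return float(np.mean(np.linalg.norm(pred - label, axis=1)))
```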
Further as a preferred embodiment, the performing of avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result includes:
selecting an image of the human body in the neutral position from the labeled image set as a reference image;
acquiring marked key points according to the recognition result of the reference image, and determining a reference vertical line according to the marked key points;
obtaining an image to be judged from the labeled image set according to the appearance time of the obstacle;
and performing avoidance judgment according to the appearance position of the obstacle, the reference vertical line and the key point recognition result of the image to be judged to obtain the obstacle avoidance recognition result.
In the embodiment of the invention, an image of the human body in the neutral position is selected from the labeled image set as the reference image. In the camera-based human-computer interaction obstacle avoidance game, the user can be reminded by a prompt on the display interface of the intelligent terminal device to stand in the neutral position, and the camera then acquires the image of the human body in the neutral position as the reference image. Referring to fig. 2, the marked key points, comprising the nose tip A, the left shoulder joint point B, the right shoulder joint point C, the left hip joint point D and the right hip joint point E, are obtained from the recognition result of the reference image, and the reference vertical line is determined from the marked key points. In the embodiment of the invention, the vertical line passing through the midpoint of points D and E in the reference image is marked as the reference vertical line L. The image to be judged is then obtained from the labeled image set according to the appearance time of the obstacle; each labeled image in the labeled image set carries a time tag, so the image to be judged can be selected according to the appearance time of the obstacle, which reduces computational complexity. Finally, avoidance judgment is performed according to the appearance position of the obstacle, the reference vertical line and the key point recognition result of the image to be judged to obtain the obstacle avoidance recognition result.
In a possible embodiment, a camera for acquiring the user's limb actions is preset on the intelligent terminal device, and the human body region marking model and the key point recognition model for recognizing the nose tip, shoulder joint points and hip joint points of the human body are trained in advance. The intelligent terminal device referred to here may be: a mobile phone with a camera, an iPad, a computer, a television, an intelligent interactive large screen and the like. The camera on the intelligent terminal device monitors user information; when the limb image above a user's hip joints is acquired, that user is identified as the operating user, and when limb images of multiple users are acquired simultaneously (i.e. multiple users are in frame at once), the user occupying the largest area is taken as the operating user. After the operating user confirms the start of the game, a prompt on the game interface reminds the user to stand in the neutral position. The image above the operating user's hip joints is extracted through the pre-trained human body region marking model and key point recognition model, and the key points are recognized and marked to form the reference image. Referring to fig. 3, when an obstacle appears on the left side of the screen, suppose the operating user dodges toward the right, where VD_AC is the horizontal distance between the projections of point A and point C in the vertical direction, and VD_AB (not shown) is the horizontal distance between the projections of point A and point B in the vertical direction. Then, when point B moves to the right of the reference vertical line L and VD_AC/VD_AB is less than the threshold, it can be determined that the user has dodged to the right.
Similarly, when point C moves to the left of the reference vertical line L and VD_AB/VD_AC is smaller than the threshold, it can be determined that the user has dodged to the left. Combined with the appearance positions and time periods of the obstacles in the game, it is judged whether the user has successfully completed an avoidance operation. The threshold in the embodiment of the invention can be set to 0.5 and adjusted according to the actual situation. The embodiment of the invention can be applied to a real-time human-computer interaction game, for example one running on a smart television with a camera. The game rule is to keep the legs motionless and avoid free-falling eggs by moving the shoulders left and right. After the game starts, the camera recognizes the game user in real time. When a free-falling egg appears in the game interface, and before the egg falls past 1/2 of the height of the game interface, if the obstacle avoidance recognition method provided by the embodiment of the invention recognizes that the game user has completed an avoidance action, the egg is successfully avoided; if the method recognizes that the user has failed the avoidance action, the egg is not successfully avoided, the egg breaks, and the game is lost.
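The geometric judgment above (figs. 2-3) can be sketched as follows. Coordinates are (x, y) pixels with x increasing to the right; the function names and the sample coordinates are illustrative, while the 0.5 threshold is the one given in the text.

```python
# Sketch of the avoidance judgment. Key points: A nose tip, B left shoulder
# joint, C right shoulder joint, D/E hip joints, each an (x, y) coordinate.
# VD_AB / VD_AC are the horizontal distances of the A-B and A-C projections.

def reference_vertical_x(d, e):
    """x-coordinate of the reference vertical line L through the midpoint of D and E."""
    return (d[0] + e[0]) / 2.0

def judge_avoidance(a, b, c, ref_x, threshold=0.5):
    """Return 'right', 'left', or None per the VD-ratio rule above."""
    vd_ab = abs(a[0] - b[0])  # horizontal distance between A and B
    vd_ac = abs(a[0] - c[0])  # horizontal distance between A and C
    if b[0] > ref_x and vd_ab > 0 and vd_ac / vd_ab < threshold:
        return 'right'   # B right of line L, nose close to C
    if c[0] < ref_x and vd_ac > 0 and vd_ab / vd_ac < threshold:
        return 'left'    # C left of line L, nose close to B
    return None
```

With hips at x = 90 and 110 the reference line sits at x = 100; a lean that carries B past it while the nose nearly overlaps C in x is reported as a dodge to the right.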
In another aspect, an embodiment of the present invention also provides an obstacle avoidance recognition system, comprising: a first module for performing framing processing on an input video to be identified to obtain an image set to be identified;
a second module for preprocessing the image set to be identified through a pre-trained human body region marking model to obtain a preprocessed image set;
a third module for inputting the preprocessed image set into a pre-trained key point recognition model for key point labeling processing to obtain a labeled image set;
a fourth module for acquiring the appearance position and appearance time of an obstacle;
and a fifth module for performing avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result.
Further as a preferred embodiment, the first module is configured to perform framing processing on an input video to be identified to obtain a set of images to be identified, and includes:
The first unit is used for carrying out video division processing on the input video to be identified to obtain a plurality of video segments;
and the second unit is used for extracting each video segment according to the preset frame rate to obtain an image set to be identified.
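The two units above (video division followed by extraction at a preset frame rate) can be sketched as frame-index selection. All names and parameters below are the editor's; a real implementation would decode the selected frames with a video library such as OpenCV, and only the index arithmetic is shown here.

```python
def sample_frame_indices(total_frames, video_fps, num_segments, sample_fps):
    """Split a video into equal segments, then pick frame indices from
    each segment at a preset sampling rate (hypothetical parameters)."""
    # Sample every `step` source frames to approximate the preset rate.
    step = max(1, round(video_fps / sample_fps))
    seg_len = total_frames // num_segments  # remainder frames are dropped
    indices = []
    for s in range(num_segments):
        start, end = s * seg_len, (s + 1) * seg_len
        indices.extend(range(start, end, step))
    return indices
```

For example, a 100-frame, 30 fps clip divided into two segments and sampled at 10 fps yields every third frame of each segment, giving the image set to be identified.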
It can be understood that the content of the above obstacle avoidance recognition method embodiment is applicable to the present system embodiment. The functions specifically implemented by the system embodiment are the same as those of the method embodiment, and the beneficial effects achieved are likewise the same.
Corresponding to the method of fig. 1, the embodiment of the invention also provides an electronic device, which comprises a processor and a memory; the memory is used for storing programs; the processor executes the program to implement the method as described above.
Corresponding to the method of fig. 1, an embodiment of the present invention also provides a computer-readable storage medium storing a program to be executed by a processor to implement the method as described above.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
In summary, the embodiment of the invention has the following advantages: only three key points, namely the nose tip, the shoulder joint point and the hip joint point, are recognized through the human body region marking model and the key point recognition model, and the vertical distances between these points and their ratio are then used for judgment, so that whether the user successfully avoids an obstacle can be determined. This reduces the complexity of avoidance recognition and improves its processing efficiency.
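Claim 4 below adds a preprocessing detail: each cut image is combined with a set of filter coefficients obtained from a "thermodynamic generation formula" to produce a thermal (heat) image for the key point model. That formula is not given in this excerpt, so the sketch below assumes a 2-D Gaussian centred on the crop; the function name and the `sigma_frac` parameter are hypothetical.

```python
import numpy as np

def make_heat_image(crop, sigma_frac=0.25):
    """Superpose a Gaussian coefficient map onto a cropped grayscale image.

    Assumption: 'superposing' is read as element-wise weighting of pixel
    values by the filter coefficients; the actual formula is undisclosed.
    """
    h, w = crop.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0          # crop centre
    sigma = sigma_frac * max(h, w)
    # Filter coefficient value set: a centred 2-D Gaussian in [0, 1]
    coeff = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    heat = np.clip(crop * coeff, 0, 255)           # superpose on pixels
    return heat.astype(np.uint8)
```

The cut image and its heat image together then form the preprocessed image set fed to the key point recognition model.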
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the embodiments described above, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.

Claims (9)

1. A method of obstacle avoidance recognition, the method comprising:
carrying out framing treatment on the input video to be identified to obtain an image set to be identified;
preprocessing the image set to be identified through a pre-trained human body region marking model to obtain a preprocessed image set;
Inputting the preprocessed image set into a pre-trained key point recognition model for key point labeling treatment to obtain a labeled image set, wherein the key points comprise nose tips, shoulder joint points and hip joint points of a human body;
Acquiring the appearance position and appearance time of the obstacle;
performing avoidance recognition processing on the labeled image set according to the appearance position and the appearance time of the obstacle to obtain an obstacle avoidance recognition result;
wherein the performing avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain the obstacle avoidance recognition result comprises:
selecting an image in which the human body is in a neutral position from the labeled image set as a reference image;
acquiring labeled key points according to a recognition result of the reference image, and determining a reference vertical line according to the labeled key points;
obtaining an image to be judged from the labeled image set according to the appearance time of the obstacle;
and performing avoidance judgment according to the appearance position of the obstacle, the reference vertical line, and a key point recognition result of the image to be judged, to obtain the obstacle avoidance recognition result.
2. The method according to claim 1, wherein the framing the input video to be identified to obtain the set of images to be identified comprises:
carrying out video division processing on an input video to be identified to obtain a plurality of video segments;
and extracting each video segment according to the preset frame rate to obtain an image set to be identified.
3. The method of claim 1, further comprising training the human body region marking model before the preprocessing of the image set to be identified by the pre-trained human body region marking model to obtain the preprocessed image set, the training comprising:
Acquiring a human body image training set;
carrying out marking frame selection processing on each image in the human body image training set to obtain a marked image set;
Inputting the marked image set into the human body region marking model to obtain a marking result;
determining a trained loss value according to the marking result and the marking of the marking image set;
And updating parameters of the human body region marking model according to the loss value.
4. The method according to claim 1, wherein the preprocessing the set of images to be identified by the pre-trained body region labeling model to obtain a preprocessed set of images comprises:
inputting each frame of image in the image set to be identified into the human body region marking model to obtain an image marking frame set;
Cutting the image set to be identified according to the image marking frame set to obtain a cut image set;
performing thermodynamic diagram generation processing on the cut image set to obtain a thermal image set;
wherein the performing thermodynamic diagram generation processing on the cut image set to obtain the thermal image set comprises:
calculating each cut image in the cut image set according to a thermodynamic generation formula to obtain a filter coefficient value set;
superposing the filter coefficient value set with the pixel values of the corresponding cut images in the cut image set to obtain the thermal image set;
and determining the cut image set and the thermal image set as the preprocessed image set.
5. The method of claim 1, comprising training the keypoint identification model prior to inputting the preprocessed image set into a pre-trained keypoint identification model for keypoint labeling, the step comprising:
Acquiring an identification image training set, wherein the identification image training set comprises a cutting training image and a thermal training image;
inputting the recognition image training set into the key point recognition model, and recognizing the nose tip, the shoulder joint point and the hip joint point of the human body in the recognition image training set to obtain a key point recognition result;
determining a training loss value according to the key point identification result and the label of the identification image training set;
And updating parameters of the key point identification model according to the loss value.
6. An obstacle avoidance recognition system, the system comprising:
The first module is used for framing the input video to be identified to obtain an image set to be identified;
the second module is used for preprocessing the image set to be identified through a pre-trained human body region marking model to obtain a preprocessed image set;
The third module is used for inputting the preprocessed image set into a pre-trained key point recognition model to carry out key point labeling processing to obtain a labeled image set;
a fourth module for acquiring the appearance position and appearance time of the obstacle;
a fifth module, configured to perform avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain an obstacle avoidance recognition result;
wherein the fifth module being configured to perform avoidance recognition processing on the labeled image set according to the appearance position and appearance time of the obstacle to obtain the obstacle avoidance recognition result comprises:
selecting an image in which the human body is in a neutral position from the labeled image set as a reference image;
acquiring labeled key points according to a recognition result of the reference image, and determining a reference vertical line according to the labeled key points;
obtaining an image to be judged from the labeled image set according to the appearance time of the obstacle;
and performing avoidance judgment according to the appearance position of the obstacle, the reference vertical line, and a key point recognition result of the image to be judged, to obtain the obstacle avoidance recognition result.
7. The system of claim 6, wherein the first module configured to frame the input video to be identified to obtain the set of images to be identified comprises:
The first unit is used for carrying out video division processing on the input video to be identified to obtain a plurality of video segments;
and the second unit is used for extracting each video segment according to the preset frame rate to obtain an image set to be identified.
8. An electronic device comprising a memory and a processor;
the memory is used for storing programs;
The processor executing the program implements the method of any one of claims 1 to 5.
9. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 5.
CN202211634663.XA 2022-12-19 2022-12-19 Obstacle avoidance recognition method, system, electronic equipment and storage medium Active CN115810203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211634663.XA CN115810203B (en) 2022-12-19 2022-12-19 Obstacle avoidance recognition method, system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115810203A CN115810203A (en) 2023-03-17
CN115810203B (en) 2024-05-10

Family

ID=85486148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211634663.XA Active CN115810203B (en) 2022-12-19 2022-12-19 Obstacle avoidance recognition method, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115810203B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598237A (en) * 2016-11-30 2017-04-26 宇龙计算机通信科技(深圳)有限公司 Game interaction method and device based on virtual reality
CN107894773A (en) * 2017-12-15 2018-04-10 广东工业大学 A kind of air navigation aid of mobile robot, system and relevant apparatus
CN108985259A (en) * 2018-08-03 2018-12-11 百度在线网络技术(北京)有限公司 Human motion recognition method and device
KR102151494B1 (en) * 2019-11-12 2020-09-03 가천대학교 산학협력단 Active feedback virtual reality system for brain and physical health through user's activity
CN112052786A (en) * 2020-09-03 2020-12-08 上海工程技术大学 Behavior prediction method based on grid division skeleton
CN112473121A (en) * 2020-11-13 2021-03-12 海信视像科技股份有限公司 Display device and method for displaying dodging ball based on limb recognition
CN112990057A (en) * 2021-03-26 2021-06-18 北京易华录信息技术股份有限公司 Human body posture recognition method and device and electronic equipment
KR20210110064A (en) * 2020-02-28 2021-09-07 엘지전자 주식회사 Moving Robot and controlling method
CN113709411A (en) * 2020-05-21 2021-11-26 陈涛 Sports auxiliary training system of MR intelligent glasses based on eye movement tracking technology
CN114582030A (en) * 2022-05-06 2022-06-03 湖北工业大学 Behavior recognition method based on service robot
CN115213903A (en) * 2022-07-19 2022-10-21 深圳航天科技创新研究院 Mobile robot path planning method and device based on obstacle avoidance
CN115424236A (en) * 2022-08-15 2022-12-02 南京航空航天大学 Pedestrian crossing trajectory prediction method integrating pedestrian intention and social force models

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11449061B2 (en) * 2016-02-29 2022-09-20 AI Incorporated Obstacle recognition method for autonomous robots
US10843077B2 (en) * 2018-06-08 2020-11-24 Brian Deller System and method for creation, presentation and interaction within multiple reality and virtual reality environments


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Path Planning and Evaluation for Obstacle Avoidance of Manipulator Based on Improved Artificial Potential Field and Danger Field; Zhao, Jiangbo et al.; Proceedings of the 33rd Chinese Control and Decision Conference (CCDC 2021); 2021-12-31; full text *
Human-Machine Cooperative Multi-Mode Table Tennis Robot Competition System; Qiu Ting; China Masters' Theses Full-text Database (Information Science and Technology); 2015-08-15 (No. 8); full text *

Also Published As

Publication number Publication date
CN115810203A (en) 2023-03-17

Similar Documents

Publication Publication Date Title
US11907848B2 (en) Method and apparatus for training pose recognition model, and method and apparatus for image recognition
US12039454B2 (en) Microexpression-based image recognition method and apparatus, and related device
CN107784294B (en) Face detection and tracking method based on deep learning
CN109176512A (en) A kind of method, robot and the control device of motion sensing control robot
CN111222486B (en) Training method, device and equipment for hand gesture recognition model and storage medium
CN105426827A (en) Living body verification method, device and system
CN110531853B (en) Electronic book reader control method and system based on human eye fixation point detection
CN110796018A (en) Hand motion recognition method based on depth image and color image
CN109635752A (en) Localization method, face image processing process and the relevant apparatus of face key point
CN111126280B (en) Gesture recognition fusion-based aphasia patient auxiliary rehabilitation training system and method
CN109325408A (en) A kind of gesture judging method and storage medium
CN113128368A (en) Method, device and system for detecting character interaction relationship
CN111222379A (en) Hand detection method and device
CN116051631A (en) Light spot labeling method and system
CN116109455A (en) Language teaching auxiliary system based on artificial intelligence
CN115810203B (en) Obstacle avoidance recognition method, system, electronic equipment and storage medium
Palomino et al. A novel biologically inspired attention mechanism for a social robot
CN111078008B (en) Control method of early education robot
CN116704603A (en) Action evaluation correction method and system based on limb key point analysis
CN112655021A (en) Image processing method, image processing device, electronic equipment and storage medium
Patel et al. Gesture Recognition Using MediaPipe for Online Realtime Gameplay
CN113505729A (en) Interview cheating detection method and system based on human body face movement unit
CN114779925A (en) Sight line interaction method and device based on single target
CN113870639A (en) Training evaluation method and system based on virtual reality
CN109144237B (en) Multi-channel man-machine interactive navigation method for robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant