CN113051968B - Violent sorting behavior identification method and device and computer readable storage medium - Google Patents


Info

Publication number
CN113051968B
CN113051968B (application CN201911369379.2A; application publication CN113051968A)
Authority
CN
China
Prior art keywords
coordinate system
pose
parabolic
under
sorting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911369379.2A
Other languages
Chinese (zh)
Other versions
CN113051968A (en)
Inventor
杨小平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd
Priority to CN201911369379.2A
Publication of CN113051968A
Application granted
Publication of CN113051968B
Anticipated expiration
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a violent sorting behavior identification method and device and a computer-readable storage medium. The violent sorting behavior identification method comprises the following steps: acquiring a parabolic sorting video; acquiring the pose of the physical camera under a preset coordinate system; acquiring, based on a plurality of images of the parabolic sorting video, the position information of the parabolic track under the physical camera coordinate system and the pose of the paraboloid under the preset coordinate system; calculating the pose of a virtual camera under the preset coordinate system based on the pose of the paraboloid under the preset coordinate system; correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system, to obtain the position information of the parabolic track under the virtual camera coordinate system; and identifying violent sorting behavior based on the position information of the parabolic track under the virtual camera coordinate system. The method and the device can improve the accuracy of violent sorting behavior identification.

Description

Violent sorting behavior identification method and device and computer readable storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method and apparatus for identifying violent sorting behaviors, and a computer readable storage medium.
Background
In the logistics industry, violent sorting damages a company's image and causes great economic loss. Existing identification basically relies on manual inspection and on monitoring video for supervision, which is highly subjective, requires a lot of labor cost, and only allows sampling inspection. Moreover, because the camera may not face the parabolic track squarely, the captured parabolic track is shot from an oblique direction. A parabolic track shot obliquely differs greatly from the real parabolic track, so identifying violent sorting behavior from an obliquely shot parabolic track is very inaccurate.
That is, the prior art identifies violent sorting behavior from a distorted parabolic track, and its accuracy is not high.
Disclosure of Invention
The embodiment of the application provides a violent sorting behavior identification method and device and a computer-readable storage medium, which aim to correct the distorted parabolic track before identifying violent sorting behavior, thereby improving the accuracy of violent sorting behavior identification.
In a first aspect, the present application provides a method for identifying violent sorting actions, the method comprising:
acquiring a parabolic sorting video;
acquiring the pose of the physical camera under a preset coordinate system;
acquiring position information of a parabolic track under the physical camera coordinate system and a pose of a parabolic surface under the preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is the plane where the parabolic track is located;
calculating the pose of a virtual camera under the preset coordinate system based on the pose of the paraboloid under the preset coordinate system, and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid;
correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system;
and identifying violent sorting behaviors based on the position information of the parabolic track under the virtual camera coordinate system.
Wherein acquiring the parabolic sorting video includes:
acquiring a sorting video;
splitting the sorting video to obtain a plurality of sorting sub-videos;
and performing image detection on the images in the sorting sub-videos to extract the parabolic sorting video from the plurality of sorting sub-videos.
The method for acquiring the position information of the parabolic track under the physical camera coordinate system and the pose of the parabolic surface under the preset coordinate system based on the multiple images of the parabolic sorting video comprises the following steps:
acquiring a throwing area;
intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images;
performing image detection on the plurality of intercepted images to obtain position information of the parabolic track under the physical camera coordinate system;
and acquiring the pose of the paraboloid under a preset coordinate system.
The obtaining the pose of the paraboloid under the preset coordinate system comprises the following steps:
acquiring a preset image from the plurality of intercepted images;
drawing the parabolic track on the preset image based on the position information of the parabolic track under the physical camera coordinate system so as to acquire a parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image;
and carrying out regression analysis on the parabolic image to obtain the pose of the parabolic surface under a preset coordinate system.
Wherein the acquiring the throw area includes:
performing image fusion on a plurality of images in the parabolic sorting video to obtain a fusion image;
and performing image detection on the fused image to acquire the throwing area.
The correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system includes:
calculating a first pose conversion relationship between the physical camera coordinate system and the preset coordinate system based on the pose of the physical camera under the preset coordinate system;
calculating a second pose conversion relationship between a parabolic coordinate system of the paraboloid and the preset coordinate system based on the pose of the paraboloid under the preset coordinate system;
calculating a third pose conversion relationship between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera under the preset coordinate system and the pose of the physical camera under the preset coordinate system;
determining a fourth pose conversion relationship between the physical camera coordinate system and the virtual camera coordinate system based on the first pose conversion relationship, the second pose conversion relationship and the third pose conversion relationship;
and correcting the position information of the parabolic track under the physical camera coordinate system based on the fourth pose conversion relationship to obtain the position information of the parabolic track under the virtual camera coordinate system.
Wherein the identifying violent sorting behavior based on the location information of the parabolic trajectory in the virtual camera coordinate system comprises:
acquiring a starting point coordinate and an ending point coordinate of the parabolic track under the virtual camera coordinate system based on the position information of the parabolic track under the virtual camera coordinate system;
determining a parabolic distance based on a start point coordinate and an end point coordinate of the parabolic track under the virtual camera coordinate system;
violent sorting actions are identified based on the parabolic distance.
In a second aspect, the present application provides a violent sorting action recognition device, the violent sorting action recognition device comprising:
the first acquisition unit is used for acquiring a parabolic sorting video;
the second acquisition unit is used for acquiring the pose of the physical camera under a preset coordinate system;
the third acquisition unit is used for acquiring position information of a parabolic track under the physical camera coordinate system and a pose of a parabolic surface under the preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is the plane where the parabolic track is located;
the pose calculating unit is used for calculating the pose of a virtual camera under the preset coordinate system based on the pose of the paraboloid under the preset coordinate system and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid;
the correcting unit is used for correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system;
and the identification unit is used for identifying violent sorting behaviors based on the position information of the parabolic track under the virtual camera coordinate system.
The first acquisition unit is also used for acquiring sorting videos;
splitting the sorting video to obtain a plurality of sorting sub-videos;
and performing image detection on the images in the sorting sub-videos to extract the parabolic sorting video from the plurality of sorting sub-videos.
The third acquisition unit is further used for acquiring a throwing area;
intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images;
performing image detection on the plurality of intercepted images to obtain position information of the parabolic track under the physical camera coordinate system;
and acquiring the pose of the paraboloid under a preset coordinate system.
The third obtaining unit is further used for obtaining a preset image from the plurality of intercepted images;
drawing the parabolic track on the preset image based on the position information of the parabolic track under the physical camera coordinate system so as to acquire a parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image;
and carrying out regression analysis on the parabolic image to obtain the pose of the parabolic surface under a preset coordinate system.
The third acquisition unit is further used for performing image fusion on the plurality of images in the parabolic sorting video to obtain a fused image;
And performing image detection on the fused image to acquire the throwing area.
The correcting unit is further used for calculating a first pose conversion relationship between the physical camera coordinate system and the preset coordinate system based on the pose of the physical camera under the preset coordinate system;
calculating a second pose conversion relationship between a parabolic coordinate system of the paraboloid and the preset coordinate system based on the pose of the paraboloid under the preset coordinate system;
calculating a third pose conversion relationship between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera under the preset coordinate system and the pose of the physical camera under the preset coordinate system;
determining a fourth pose conversion relationship between the physical camera coordinate system and the virtual camera coordinate system based on the first pose conversion relationship, the second pose conversion relationship and the third pose conversion relationship;
and correcting the position information of the parabolic track under the physical camera coordinate system based on the fourth pose conversion relationship to obtain the position information of the parabolic track under the virtual camera coordinate system.
The identification unit is further used for acquiring a starting point coordinate and an ending point coordinate of the parabolic track under the virtual camera coordinate system based on the position information of the parabolic track under the virtual camera coordinate system;
determining a parabolic distance based on a start point coordinate and an end point coordinate of the parabolic track under the virtual camera coordinate system;
violent sorting actions are identified based on the parabolic distance.
In a third aspect, the present application provides an electronic device, the electronic device comprising:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the violent sorting behavior identification method of any one of the first aspects.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program to be loaded by a processor for performing the steps of the method of identification of violent sorting actions of any of the first aspects.
According to the violent sorting behavior recognition method, the pose of the virtual camera under the preset coordinate system is calculated based on the pose of the paraboloid under the preset coordinate system, and the position information of the parabolic track under the physical camera coordinate system is corrected based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system, so that the position information of the parabolic track under the virtual camera coordinate system is obtained, and violent sorting behavior is recognized according to the position information of the parabolic track under the virtual camera coordinate system. Because the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid, the parabolic track shot by the virtual camera is closer to the real parabolic track than the parabolic track shot by the physical camera. After the position information of the parabolic track under the physical camera coordinate system is corrected into the position information of the parabolic track under the virtual camera coordinate system, the obtained parabolic track is closer to the real parabolic track, so the accuracy of violent sorting behavior identification can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of a scenario of an express sorting system provided in an embodiment of the present application;
FIG. 2 is a flow chart of one embodiment of a method for identifying violent sorting actions provided in embodiments of the present application;
FIG. 3 is a schematic view of a scenario of the violent sorting behavior identification method of FIG. 2;
FIG. 4 is a schematic flow chart of S21 in the violent sorting behavior recognition method of FIG. 2;
FIG. 5 is a schematic flow chart of S23 in the violent sorting behavior recognition method of FIG. 2;
FIG. 6 is a flow chart of S25 in the violent sorting behavior recognition method of FIG. 2;
FIG. 7 is a schematic diagram of an embodiment of a violent sorting behavior recognition device provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of an embodiment of an electronic device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the present application, it should be understood that the terms "center," "longitudinal," "transverse," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate an orientation or positional relationship based on that shown in the drawings, merely for convenience of description and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be configured and operated in a particular orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In this application, the term "exemplary" is used to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been shown in detail to avoid obscuring the description of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
The embodiment of the application provides a violent sorting behavior identification method, a violent sorting behavior identification device, electronic equipment and a storage medium. The following will describe in detail.
Referring to fig. 1, fig. 1 is a schematic view of a scenario of an express sorting system provided in an embodiment of the present application, where the express sorting system may include an electronic device 100 in which a violent sorting behavior recognition device is integrated, such as the electronic device in fig. 1.
In the embodiment of the application, the electronic device 100 is mainly used for acquiring a parabolic sorting video; acquiring the pose of the physical camera under a preset coordinate system; acquiring position information of a parabolic track under the physical camera coordinate system and the pose of a parabolic surface under the preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is the plane where the parabolic track is located; calculating the pose of a virtual camera under the preset coordinate system based on the pose of the paraboloid under the preset coordinate system, wherein the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid; correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system; and identifying violent sorting behavior based on the position information of the parabolic track under the virtual camera coordinate system.
In this embodiment of the present application, the electronic device 100 may be an independent server, or may be a server network or a server cluster formed by servers, for example, the electronic device 100 described in the embodiment of the present application includes, but is not limited to, a computer, a network host, a single network server, a plurality of network server sets, or a cloud server formed by a plurality of servers. Wherein the Cloud server is composed of a large number of computers or web servers based on Cloud Computing (Cloud Computing).
It will be appreciated by those skilled in the art that the application environment shown in fig. 1 is merely one application scenario of the present application and does not limit the application scenarios of the present application; other application environments may include more or fewer electronic devices than those shown in fig. 1. For example, only 1 electronic device is shown in fig. 1, but it will be appreciated that the express sorting system may also include one or more other services, which is not limited herein.
In addition, as shown in fig. 1, the express sorting system may further include a physical camera 200 for acquiring video data. The physical camera 200 may be installed at a preset position of the sorting site so as to photograph the sorting site to obtain video data.
In other embodiments, the electronic device 100 may directly be the physical camera 200, where the physical camera 200 has both the computing function of the electronic device and the function of capturing video.
It should be noted that, the schematic view of the scenario of the express sorting system shown in fig. 1 is only an example, and the express sorting system and scenario described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided in the embodiments of the present application, and those skilled in the art can know that, with the evolution of the express sorting system and the appearance of a new service scenario, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
Firstly, an embodiment of the present application provides a method for identifying violent sorting actions, where the method for identifying violent sorting actions includes:
acquiring a parabolic sorting video; acquiring the pose of the physical camera under a preset coordinate system; acquiring position information of a parabolic track under the physical camera coordinate system and the pose of a parabolic surface under the preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is the plane where the parabolic track is located; calculating the pose of a virtual camera under the preset coordinate system based on the pose of the paraboloid under the preset coordinate system, wherein the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid; correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system; and identifying violent sorting behavior based on the position information of the parabolic track under the virtual camera coordinate system.
Referring to fig. 2 and 3, fig. 2 is a schematic flow chart of an embodiment of a method for identifying violent sorting actions in an embodiment of the present application; fig. 3 is a schematic view of a scenario of the violent sorting behavior recognition method of fig. 2.
With reference to fig. 2 and 3, the violent sorting behavior identification method includes:
s21, acquiring a parabolic sorting video.
In this embodiment, referring specifically to fig. 4, fig. 4 is a schematic flow chart of S21 in the method for identifying violent sorting behavior in fig. 2.
As shown in fig. 4, S21 includes S211, S212, and S213. The method comprises the following steps:
s211, acquiring sorting videos.
The physical camera 200 is installed at a preset position of the sorting site, so that the sorting site is photographed to obtain a sorting video. The sorting video shot by the physical camera 200 may be sent directly to the electronic device or may be stored in a memory, from which the electronic device obtains the sorting video by reading the data. Sending the sorting video shot by the physical camera 200 directly to the electronic device for processing enables real-time identification of violent sorting behavior.
S212, segmenting the sorted videos to obtain a plurality of sorted sub-videos.
Specifically, the sorting video is segmented according to a preset duration. The preset duration may be 2 s, 1 s, etc. In this way, the sorting video is segmented into a plurality of sorting sub-videos, as sketched below.
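As an illustration of this splitting step, the following minimal Python sketch cuts a video into fixed-duration groups of frames. It assumes OpenCV (cv2) is available and uses a placeholder file name "sorting.mp4"; neither detail is specified by the patent.

import cv2

def split_sorting_video(path, segment_seconds=2.0):
    # Read the sorting video and group its frames into sub-videos of a preset duration.
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back to 25 fps if metadata is missing
    frames_per_segment = max(1, int(round(fps * segment_seconds)))
    segments, current = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        current.append(frame)
        if len(current) == frames_per_segment:
            segments.append(current)
            current = []
    if current:
        segments.append(current)  # keep the trailing, shorter segment
    cap.release()
    return segments  # each element is one sorting sub-video (a list of frames)

sub_videos = split_sorting_video("sorting.mp4", segment_seconds=2.0)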
S213, performing image detection on the images in the sorting sub-videos to extract a parabolic sorting video from the plurality of sorting sub-videos.
In a specific embodiment, image detection is performed on two images of a sorting sub-video to determine whether the position of the express item changes between the two images; if so, the sorting sub-video is determined to be a parabolic sorting video and is acquired.
S22, acquiring the pose of the physical camera under a preset coordinate system.
In the embodiment of the application, the pose comprises position information and posture information. The position information is the three-dimensional coordinate of the object under the preset coordinate system, and the posture information comprises a pitch angle, a roll angle and a yaw angle of the object under the preset coordinate system. The preset coordinate system may be a world coordinate system. A preset coordinate system is established, and the pose of the physical camera 200 under the preset coordinate system is acquired.
In the embodiment of the present application, regression analysis is performed on a plurality of images of the parabolic sorting video to obtain the pose of the physical camera 200 under the preset coordinate system. Of course, in other embodiments, a pre-stored pose of the physical camera 200 under the preset coordinate system may be read instead, which is not limited in this application.
Specifically, the origin O1 of the preset coordinate system is set at a distance h directly below the physical camera, the Y axis of the preset coordinate system is set in the vertical direction, and the XOZ plane of the preset coordinate system is set in the horizontal plane. The origin of the physical camera coordinate system is O2. The pose of the physical camera 200 under the preset coordinate system can thus be acquired. For example, the coordinates of the physical camera 200 under the preset coordinate system are (0, h, 0), and its posture under the preset coordinate system is a pitch angle of θ1. Of course, the preset coordinate system may also be established according to specific requirements, which is not limited in this application.
S23, acquiring position information of the parabolic track under the physical camera coordinate system and the pose of the parabolic surface under the preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is the plane where the parabolic track is located.
In the embodiment of the application, a parabolic image is acquired based on a plurality of images in the parabolic sorting video, wherein the parabolic image comprises the parabolic track and a shooting background image; regression analysis is performed on the parabolic image to obtain the pose of the paraboloid 28 under the preset coordinate system.
Referring to fig. 5, fig. 5 is a schematic flow chart of S23 in the violent sorting behavior recognition method of fig. 2.
As shown in fig. 5, in the embodiment of the present application, S23 includes S231, S232, S233, S234. The method comprises the following steps:
s231, acquiring a throwing area.
In the embodiment of the application, image fusion is carried out on a plurality of images in a parabolic sorting video to obtain a fused image; and performing image detection on the fused image to acquire a throwing area.
In a specific embodiment, all frame images in the parabolic sorting video are extracted, and pixel values of all the extracted frame images are overlapped to obtain a fusion image. In another specific embodiment, partial frame images in the parabolic sorting video are extracted, for example, only odd frame images are extracted, and pixel value superposition is performed to obtain a fused image, so that the calculation efficiency can be improved. In other embodiments, image fusion of multiple images in the parabolic sorting video may be performed in other manners, which is not limited in this application.
In a specific embodiment, image detection is performed on the fused image through a YOLO network to obtain the throwing area. The YOLO network is a single end-to-end network: from the input of the original image to the output of object positions and categories, it determines the object regions directly. The YOLO network is a convolutional neural network that predicts multiple object positions and categories at one time; it realizes end-to-end target detection and identification, and its greatest advantage is speed. Of course, in other embodiments, the target detection model may be Fast R-CNN or the like, which is not limited in this application. A sketch of the fusion and detection steps follows.
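A minimal sketch of S231 under stated assumptions: a pixel-wise maximum is used as the superposition operator (the patent does not fix the operator), frames may be sub-sampled for speed, and detect_throw_area is a hypothetical placeholder standing in for the YOLO-style detector, not a real API call.

import numpy as np

def fuse_frames(frames, stride=1):
    # Superimpose every `stride`-th frame; stride=2 keeps roughly the odd frames only,
    # trading a little accuracy for speed, as described above.
    fused = frames[0].astype(np.float32)
    for frame in frames[::stride]:
        fused = np.maximum(fused, frame.astype(np.float32))
    return fused.astype(np.uint8)

def detect_throw_area(fused_image):
    # Hypothetical placeholder for a single-stage detector (e.g. a YOLO network)
    # returning one bounding box (x, y, w, h) of the throwing area.
    raise NotImplementedError

# fused = fuse_frames(sub_video_frames, stride=2)
# throw_area = detect_throw_area(fused)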
S232, intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images.
In the embodiment of the application, the unfused multi-frame images in the parabolic sorting video are cropped based on the throwing area, so that a plurality of intercepted images are obtained. Therefore, each intercepted image contains the shooting background and the thrown object at the corresponding shooting time.
S233, performing image detection on the plurality of intercepted images to obtain the position information of the parabolic track under the physical camera coordinate system.
In the embodiment of the application, image detection is performed on the plurality of intercepted images through a deep neural network to obtain the position information of the parabolic track under the physical camera coordinate system. Since each intercepted image contains the shooting background and the thrown object at the corresponding shooting time, the position information of the thrown object at different times can be obtained, and thus the position information of the parabolic track under the physical camera coordinate system can be obtained.
S234, acquiring the pose of the paraboloid under a preset coordinate system.
In the embodiment of the application, the parabolic track is drawn on a preset image of the plurality of intercepted images based on the position information of the parabolic track under the physical camera coordinate system, so as to acquire the parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image.
In one specific example, the preset image is the first frame image of the plurality of intercepted images. The parabolic track is drawn on the preset image based on the coordinates of the parabolic track under the physical camera coordinate system so as to acquire the parabolic image. Because the preset image is an original image of the sorting video, the obtained parabolic image comprises the parabolic track and the shooting background image, and regression analysis is performed on the parabolic image by using a deep neural network model to obtain the pose of the parabolic surface 28 under the preset coordinate system. For example, the pose of the paraboloid 28 is: the intersection point of the paraboloid 28 and the Z axis of the preset coordinate system is dn, and the included angle between the paraboloid 28 and the XOY plane of the preset coordinate system is θ2.
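To make the drawing step concrete, here is a small sketch, assuming OpenCV and that the per-frame parcel positions are available as pixel coordinates; the variable names are illustrative only.

import cv2

def draw_parabolic_image(preset_image, trajectory_pixels):
    # Render the detected trajectory onto the preset (first) frame so that the
    # resulting parabolic image keeps both the trajectory and the shooting background.
    parabolic_image = preset_image.copy()
    for (u, v) in trajectory_pixels:
        cv2.circle(parabolic_image, (int(u), int(v)), radius=4, color=(0, 0, 255), thickness=-1)
    return parabolic_image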
S24, calculating the pose of the virtual camera under the preset coordinate system based on the pose of the paraboloid under the preset coordinate system, wherein the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid.
In the embodiment of the application, the included angle between the optical axis of the physical camera and the paraboloid is calculated based on the pose of the physical camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system, and whether this included angle is smaller than a preset angle value is judged. If so, the parabolic track shot by the physical camera deviates greatly from the real track, and the position information of the parabolic track under the physical camera coordinate system and the pose of the paraboloid under the preset coordinate system are acquired based on the plurality of images of the parabolic sorting video; if not, the parabolic track shot by the physical camera deviates little from the real track, and violent sorting behavior is identified based on the position information of the parabolic track under the physical camera coordinate system. The preset angle value is set according to the specific situation, for example, 30 degrees, 50 degrees, etc.
In the embodiment of the present application, the pose of the virtual camera under the preset coordinate system is determined based on the pose of the paraboloid 28, wherein the included angle between the optical axis of the virtual camera and the paraboloid 28 is greater than the included angle between the optical axis of the physical camera 200 and the paraboloid 28. Specifically, an included angle between a straight line and a plane is defined to be 0 to 90 degrees.
In a preferred embodiment, the optical axis of the virtual camera is perpendicular to the paraboloid 28, which further avoids distortion of the parabolic track caused by the camera optical axis being inclined with respect to the paraboloid 28.
Further, the physical camera 200 and the virtual camera are located at the same position. That is, the origin of the virtual camera coordinate system is also O2. Therefore, when the coordinates of the intersection point of the paraboloid 28 and the Z axis of the preset coordinate system are (0, dn), the included angle between the paraboloid 28 and the XOY plane of the preset coordinate system is θ2, the coordinates of the physical camera 200 under the preset coordinate system are (0, h, 0), and the posture of the physical camera 200 under the preset coordinate system is a pitch angle of θ1, the pose of the virtual camera is known as: a roll angle of θ2 and coordinates (0, h, 0). Of course, in other embodiments, the position of the virtual camera may be set according to the specific situation, which is not limited in this application.
S25, correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system, and obtaining the position information of the parabolic track under the virtual camera coordinate system.
To illustrate the specific method of correcting parabolic trajectories of the present application, the following assumptions are made:
the coordinates of the intersection point of the paraboloid 28 and the Z axis of the preset coordinate system are (0, dn), and the included angle between the paraboloid 28 and the XOY plane of the preset coordinate system is θ2; the coordinates of the physical camera 200 under the preset coordinate system are (0, h, 0), and the pitch angle of the physical camera 200 under the preset coordinate system is θ1; the coordinates of the virtual camera under the preset coordinate system are (0, h, 0), and its posture is a roll angle of θ2. The point P is any point on the parabolic track.
In this embodiment, referring to fig. 6, fig. 6 is a schematic flow chart of S25 in the method for identifying violent sorting behavior in fig. 2.
As shown in fig. 6, in the embodiment of the present application, S25 includes S251, S252, S253, S254, and S255.
The method comprises the following steps:
s251, calculating a first pose conversion relation between the physical camera coordinate system and the preset coordinate system based on the pose of the physical camera under the preset coordinate system.
Wherein the first pose transformation relationship comprises a first rotation matrix and a first translation matrix.
In one specific embodiment, the first rotation matrix between the physical camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (1),
wherein R1 is the first rotation matrix.
The first translation matrix between the physical camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (2),
wherein T1 is the first translation matrix.
Combining formulas (1) and (2), the first pose conversion relationship between the physical camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (3),
wherein the two vectors in formula (3) are the coordinates of point P in the physical camera coordinate system and the coordinates of point P in the preset coordinate system, respectively.
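The matrices of formulas (1) to (3) are not reproduced in this text. The following LaTeX sketch gives one plausible reconstruction under stated assumptions: the pitch is taken as a rotation about the X axis of the preset coordinate system, the world-to-camera convention is P_camera = R^T (P_world - T), and the symbols P_w and P_c1 for the coordinates of point P in the preset and physical camera coordinate systems are introduced here, not taken from the patent. Formulas (4) to (6) and (7) to (9) below would have the same structure with (θ2, dn) and (θ2, h) in place of (θ1, h).

R_1 =
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\theta_1 & -\sin\theta_1 \\
0 & \sin\theta_1 & \cos\theta_1
\end{pmatrix},
\qquad
T_1 =
\begin{pmatrix} 0 \\ h \\ 0 \end{pmatrix},
\qquad
P_{c1} = R_1^{\top}\left(P_w - T_1\right).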
S252, calculating a second pose conversion relation between the paraboloid of the paraboloid and a preset coordinate system based on the pose of the paraboloid under the preset coordinate system.
Wherein the second pose conversion relationship includes a second rotation matrix and a second translation matrix.
In one particular embodiment, the parabolic coordinate system is established with the point of intersection of the parabolic surface 28 and the Z axis of the predetermined coordinate system as the origin.
The second rotation matrix between the parabolic coordinate system of the parabolic surface 28 and the preset coordinate system satisfies the relationship shown in formula (4),
wherein R2 is the second rotation matrix.
The second translation matrix between the parabolic coordinate system of the paraboloid 28 and the preset coordinate system satisfies the relationship shown in formula (5),
wherein T2 is the second translation matrix.
The second pose conversion relationship between the paraboloid 28 and the preset coordinate system satisfies the relationship shown in formula (6),
wherein the two vectors in formula (6) are the coordinates of point P in the parabolic coordinate system and the coordinates of point P in the preset coordinate system, respectively.
S253, calculating a third pose conversion relation between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera under the preset coordinate system and the pose of the physical camera under the preset coordinate system.
Wherein the third pose transformation relationship comprises a third rotation matrix and a third translation matrix.
In one specific embodiment, the third rotation matrix between the virtual camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (7),
wherein R3 is the third rotation matrix.
The third translation matrix between the virtual camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (8),
wherein T3 is the third translation matrix.
In one specific embodiment, the third pose conversion relationship between the virtual camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (9),
wherein the two vectors in formula (9) are the coordinates of point P in the virtual camera coordinate system and the coordinates of point P in the preset coordinate system, respectively.
S254, determining a fourth pose conversion relation between the entity camera coordinate system and the virtual camera coordinate system based on the first pose conversion relation, the second pose conversion relation and the third pose conversion relation.
Specifically, according to the first pose conversion relationship and the second pose conversion relationship, the relationship shown in the formula (10) can be obtained,
according to the second pose conversion relationship and the third pose conversion relationship, a relationship as shown in formula (11) can be obtained,
Obviously, the formulas (10) and (11) are combined to obtain a fourth pose conversion relation between the physical camera coordinate system and the virtual camera coordinate system, as shown in the formula (12),
wherein H is 1 And is obtained from formula (10), H 2 And is obtained from equation (11).
And S255, correcting the position information of the parabolic track under the physical camera coordinate system based on the fourth pose conversion relationship to obtain the position information of the parabolic track under the virtual camera coordinate system.
In S254, the fourth pose conversion relationship between the physical camera coordinate system and the virtual camera coordinate system has been obtained. Therefore, the coordinates of the parabolic track in the physical camera coordinate system are input into formula (12) one by one, and the coordinates of the parabolic track in the virtual camera coordinate system are obtained. Thereby, the position information of the parabolic track under the virtual camera coordinate system can be obtained.
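A sketch of this correction step, assuming the fourth pose conversion relationship is available as a 4x4 homogeneous matrix H_c1_to_v (physical-camera frame to virtual-camera frame); the matrix itself would come from the derivation above, and the names are illustrative.

import numpy as np

def correct_trajectory(points_c1, H_c1_to_v):
    # points_c1: (N, 3) array of trajectory points in the physical camera frame.
    points_c1 = np.asarray(points_c1, dtype=np.float64)
    homogeneous = np.hstack([points_c1, np.ones((len(points_c1), 1))])
    corrected = (np.asarray(H_c1_to_v, dtype=np.float64) @ homogeneous.T).T
    return corrected[:, :3]  # trajectory points in the virtual camera frame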
S26, identifying violent sorting behaviors based on position information of the parabolic track under a virtual camera coordinate system.
In the embodiment of the application, based on the position information of the parabolic track under the virtual camera coordinate system, the starting point coordinate and the end point coordinate of the parabolic track under the virtual camera coordinate system are obtained; determining a parabolic distance based on a start point coordinate and an end point coordinate of the parabolic track under a virtual camera coordinate system; violent sorting actions are identified based on the parabolic distance.
In a specific embodiment, judging whether the parabolic distance is larger than a preset value, if so, judging that the sorting behavior corresponding to the parabolic track is a violent sorting behavior; if not, judging that the sorting behavior corresponding to the parabolic track is not the violent sorting behavior.
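As an illustration of S26, a minimal sketch assuming the corrected trajectory is an ordered list of 3D points in the virtual camera coordinate system; the threshold value shown is illustrative and would be set per sorting site.

import numpy as np

def is_violent_sorting(trajectory_v, distance_threshold=1.5):
    # Parabolic distance = straight-line distance between the start and end points
    # of the corrected trajectory; an overly long throw is flagged as violent sorting.
    start = np.asarray(trajectory_v[0], dtype=np.float64)
    end = np.asarray(trajectory_v[-1], dtype=np.float64)
    parabolic_distance = float(np.linalg.norm(end - start))
    return parabolic_distance > distance_threshold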
According to the violent sorting behavior recognition method, the pose of a virtual camera under a preset coordinate system is calculated based on the pose of a paraboloid under the preset coordinate system, then the position information of the parabolic track under the physical camera coordinate system is corrected based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system, so that the position information of the parabolic track under the virtual camera coordinate system is obtained, and finally violent sorting behavior is recognized based on the position information of the parabolic track under the virtual camera coordinate system. Because the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid, the parabolic track shot by the virtual camera is closer to the real parabolic track than the parabolic track shot by the physical camera. After the position information of the parabolic track under the physical camera coordinate system is corrected into the position information of the parabolic track under the virtual camera coordinate system, the obtained parabolic track is closer to the real parabolic track, so the accuracy of violent sorting behavior identification can be improved.
In order to better implement the method for identifying the violent sorting behaviors in the embodiment of the present application, on the basis of the method for identifying the violent sorting behaviors, there is further provided a device for identifying the violent sorting behaviors in the embodiment of the present application, as shown in fig. 7, fig. 7 is a schematic diagram of an embodiment of the device for identifying the violent sorting behaviors provided in the embodiment of the present application, where the device for identifying the violent sorting behaviors includes a first obtaining unit 401, a second obtaining unit 402, a third obtaining unit 403, a pose calculating unit 404, a correcting unit 405, and an identifying unit 406:
a first acquiring unit 401 for acquiring a parabolic sorting video;
a second obtaining unit 402, configured to obtain a pose of the physical camera in a preset coordinate system;
a third obtaining unit 403, configured to obtain, based on a plurality of images of the parabolic sorting video, position information of a parabolic track under a physical camera coordinate system and a pose of a paraboloid under a preset coordinate system, where the paraboloid is a plane where the parabolic track is located;
a pose calculating unit 404, configured to calculate a pose of the virtual camera under a preset coordinate system based on a pose of the paraboloid under the preset coordinate system, where an included angle between an optical axis of the virtual camera and the paraboloid is greater than an included angle between an optical axis of the physical camera and the paraboloid;
A correcting unit 405, configured to correct position information of a parabolic track under a coordinate system of the physical camera based on a pose of the physical camera under a preset coordinate system, a pose of the virtual camera under the preset coordinate system, and a pose of the paraboloid under the preset coordinate system, so as to obtain position information of the parabolic track under the virtual camera coordinate system;
the identifying unit 406 is configured to identify violent sorting actions based on position information of the parabolic trajectory in the virtual camera coordinate system.
The first obtaining unit 401 is further configured to obtain a sorting video;
splitting the sorting video to obtain a plurality of sorting sub-videos;
image detection is performed on images in the sorting sub-videos to extract a parabolic sorting video from the plurality of sorting sub-videos.
Wherein, the third obtaining unit 403 is further configured to obtain a throw area;
intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images;
performing image detection on the plurality of intercepted images to obtain position information of the parabolic track under the physical camera coordinate system;
and acquiring the pose of the paraboloid under a preset coordinate system.
The third obtaining unit 403 is further configured to obtain a preset image from the plurality of captured images;
drawing the parabolic track on the preset image based on the position information of the parabolic track under the physical camera coordinate system to obtain a parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image;
and carrying out regression analysis on the parabolic image to obtain the pose of the paraboloid under a preset coordinate system.
The third obtaining unit 403 is further configured to perform image fusion on a plurality of images in the parabolic sorting video, so as to obtain a fused image;
and performing image detection on the fused image to acquire a throwing area.
The correcting unit 405 is further configured to calculate a first pose conversion relationship between the physical camera coordinate system and the preset coordinate system based on the pose of the physical camera in the preset coordinate system;
calculating a second pose conversion relationship between the parabolic coordinate system of the paraboloid and the preset coordinate system based on the pose of the paraboloid under the preset coordinate system;
calculating a third pose conversion relationship between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera under the preset coordinate system and the pose of the physical camera under the preset coordinate system;
determining a fourth pose conversion relationship between the physical camera coordinate system and the virtual camera coordinate system based on the first pose conversion relationship, the second pose conversion relationship and the third pose conversion relationship;
And correcting the position information of the parabolic track under the physical camera coordinate system based on the fourth pose conversion relation to obtain the position information of the parabolic track under the virtual camera coordinate system.
The identifying unit 406 is further configured to obtain a start point coordinate and an end point coordinate of the parabolic track in the virtual camera coordinate system based on the position information of the parabolic track in the virtual camera coordinate system;
determining a parabolic distance based on a start point coordinate and an end point coordinate of the parabolic track under a virtual camera coordinate system;
violent sorting actions are identified based on the parabolic distance.
According to the violent sorting behavior recognition device, the pose of a virtual camera under a preset coordinate system is calculated based on the pose of a paraboloid under the preset coordinate system, then the position information of the parabolic track under the physical camera coordinate system is corrected based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system, so that the position information of the parabolic track under the virtual camera coordinate system is obtained, and finally violent sorting behavior is recognized based on the position information of the parabolic track under the virtual camera coordinate system. Because the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid, the parabolic track shot by the virtual camera is closer to the real parabolic track than the parabolic track shot by the physical camera. After the position information of the parabolic track under the physical camera coordinate system is corrected into the position information of the parabolic track under the virtual camera coordinate system, the obtained parabolic track is closer to the real parabolic track, so the accuracy of violent sorting behavior identification can be improved.
The embodiment of the application also provides electronic equipment. As shown in fig. 8, fig. 8 is a schematic structural diagram of an embodiment of an electronic device provided in an embodiment of the present application, specifically:
the electronic device may include one or more processing cores 'processors 501, one or more computer-readable storage media's memory 502, a power supply 503, and an input unit 504, among other components. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 8 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. Wherein:
the processor 501 is a control center of the electronic device, and connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby performing overall monitoring of the electronic device. Optionally, processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 performs various functional applications and data processing by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the electronic device, etc. In addition, the memory 502 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide access to the memory 502 by the processor 501.
The electronic device further includes a power supply 503 for powering the various components. Preferably, the power supply 503 is logically connected to the processor 501 via a power management system, so that charging, discharging, and power consumption management functions are performed by the power management system. The power supply 503 may further include any one or more of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The electronic device may further include an input unit 504, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 501 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and the processor 501 executes the application programs stored in the memory 502, so as to implement various functions as follows:
acquiring a parabolic sorting video; acquiring the pose of the physical camera under a preset coordinate system; acquiring position information of a parabolic track under a physical camera coordinate system and the pose of a parabolic surface under a preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is a plane where the parabolic track is located; calculating the pose of the virtual camera under a preset coordinate system based on the pose of the paraboloid under the preset coordinate system, and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid; correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system; and identifying violent sorting behaviors based on the position information of the parabolic track under the virtual camera coordinate system.
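For readability, the hypothetical driver below strings these steps together; it reuses the illustrative helpers sketched earlier in this document, and load_parabolic_sorting_video and detect_trajectory are placeholder names standing in for the video acquisition and trajectory/paraboloid detection steps, none of which are named by the patent.

```python
def recognize_violent_sorting(video_path, T_world_physical):
    # 1. acquire the parabolic sorting video (placeholder loader)
    frames = load_parabolic_sorting_video(video_path)
    # 2. parabolic track in the physical camera frame + paraboloid (normal, point) in the preset frame
    traj_physical, plane_normal, plane_point = detect_trajectory(frames)
    # 3. virtual camera pose in the preset frame, optical axis facing the paraboloid
    T_world_virtual = virtual_camera_pose(plane_normal, plane_point)
    # 4. correct the track from the physical camera frame into the virtual camera frame
    traj_virtual = correct_trajectory(traj_physical, T_world_physical, T_world_virtual)
    # 5. distance-based identification of violent sorting
    return is_violent_sort(traj_virtual)
```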
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored on a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, which may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like. A computer program is stored on the storage medium and is loaded by a processor to perform the steps of any of the violent sorting behavior identification methods provided by the embodiments of the present application. For example, the computer program loaded by the processor may perform the following steps:
Acquiring a parabolic sorting video; acquiring the pose of the physical camera under a preset coordinate system; acquiring position information of a parabolic track under a physical camera coordinate system and the pose of a parabolic surface under a preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is a plane where the parabolic track is located; calculating the pose of the virtual camera under a preset coordinate system based on the pose of the paraboloid under the preset coordinate system, and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid; correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system; and identifying violent sorting behaviors based on the position information of the parabolic track under the virtual camera coordinate system.
Each of the foregoing embodiments is described with its own emphasis. For portions not described in detail in one embodiment, reference may be made to the detailed descriptions of the other embodiments, which are not repeated here.
In implementation, each unit or structure may be implemented as an independent entity, or may be combined in any manner and implemented as the same entity or several entities; for the implementation of each unit or structure, reference may be made to the foregoing method embodiments, which are not repeated herein.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, which are not described herein again.
The violent sorting behavior identification method, apparatus, and computer-readable storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, persons skilled in the art may make changes to the specific implementations and the scope of application in light of the idea of the present application. In view of the above, the contents of this description should not be construed as limiting the present application.

Claims (9)

1. A method for identifying violent sorting behaviors, the method comprising:
acquiring a parabolic sorting video;
acquiring the pose of the physical camera under a preset coordinate system;
acquiring position information of a parabolic track under a physical camera coordinate system and a pose of a parabolic surface under a preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is a plane where the parabolic track is located;
calculating the pose of the virtual camera under a preset coordinate system based on the pose of the paraboloid under the preset coordinate system, and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid;
Correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system;
identifying violent sorting behaviors based on position information of the parabolic track under the virtual camera coordinate system;
the correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system includes:
calculating a first pose conversion relationship between the physical camera coordinate system and the preset coordinate system based on the pose of the physical camera under the preset coordinate system;
calculating a second pose conversion relation between a parabolic coordinate system of the paraboloid and the preset coordinate system based on the pose of the paraboloid under the preset coordinate system;
calculating a third pose conversion relation between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera under the preset coordinate system and the pose of the physical camera under the preset coordinate system;
determining a fourth pose conversion relationship between the physical camera coordinate system and the virtual camera coordinate system based on the first pose conversion relationship, the second pose conversion relationship and the third pose conversion relationship;
and correcting the position information of the parabolic track under the physical camera coordinate system based on the fourth pose conversion relation to obtain the position information of the parabolic track under the virtual camera coordinate system.
2. The method of claim 1, wherein the acquiring a parabolic sorting video comprises:
acquiring a sorting video;
splitting the sorting video to obtain a plurality of sorting sub-videos;
and performing image detection on the images in the sorting sub-videos to extract parabolic sorting videos from the plurality of sorting sub-videos.
3. The method of claim 1, wherein the acquiring, based on the plurality of images of the parabolic sorting video, the position information of the parabolic track under the physical camera coordinate system and the pose of the paraboloid under the preset coordinate system includes:
Acquiring a throwing area;
intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images;
performing image detection on the plurality of intercepted images to obtain position information of the parabolic track under a physical camera coordinate system;
and acquiring the pose of the paraboloid under a preset coordinate system.
4. The method of claim 3, wherein the acquiring the pose of the paraboloid under a preset coordinate system comprises:
acquiring a preset image from the plurality of intercepted images;
drawing the parabolic track on the preset image based on the position information of the parabolic track under a physical camera coordinate system so as to acquire a parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image;
and carrying out regression analysis on the parabolic image to obtain the pose of the parabolic surface under a preset coordinate system.
5. The method of claim 3, wherein the acquiring a throwing area comprises:
performing image fusion on a plurality of images in the parabolic sorting video to obtain a fusion image;
And performing image detection on the fused image to acquire the throwing area.
6. The method of claim 1, wherein the identifying violent sorting behaviors based on the position information of the parabolic track under the virtual camera coordinate system includes:
acquiring a starting point coordinate and an ending point coordinate of the parabolic track under the virtual camera coordinate system based on the position information of the parabolic track under the virtual camera coordinate system;
determining a parabolic distance based on a start point coordinate and an end point coordinate of the parabolic track under the virtual camera coordinate system;
violent sorting actions are identified based on the parabolic distance.
7. An apparatus for identifying violent sorting behaviors, wherein the apparatus comprises:
the first acquisition unit is used for acquiring a parabolic sorting video;
the second acquisition unit is used for acquiring the pose of the physical camera under a preset coordinate system;
the third acquisition unit is used for acquiring position information of a parabolic track under a physical camera coordinate system and pose of a parabolic surface under a preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the parabolic surface is a plane where the parabolic track is located;
The pose calculating unit is used for calculating the pose of the virtual camera under the preset coordinate system based on the pose of the paraboloid under the preset coordinate system and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid;
the correcting unit is used for correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system;
the identification unit is used for identifying violent sorting behaviors based on the position information of the parabolic track under the virtual camera coordinate system;
the correcting the position information of the parabolic track under the physical camera coordinate system based on the pose of the physical camera under the preset coordinate system, the pose of the virtual camera under the preset coordinate system and the pose of the paraboloid under the preset coordinate system to obtain the position information of the parabolic track under the virtual camera coordinate system includes:
calculating a first pose conversion relationship between the physical camera coordinate system and the preset coordinate system based on the pose of the physical camera under the preset coordinate system;
calculating a second pose conversion relation between a parabolic coordinate system of the paraboloid and the preset coordinate system based on the pose of the paraboloid under the preset coordinate system;
calculating a third pose conversion relation between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera under the preset coordinate system and the pose of the physical camera under the preset coordinate system;
determining a fourth pose conversion relationship between the physical camera coordinate system and the virtual camera coordinate system based on the first pose conversion relationship, the second pose conversion relationship and the third pose conversion relationship;
and correcting the position information of the parabolic track under the physical camera coordinate system based on the fourth pose conversion relation to obtain the position information of the parabolic track under the virtual camera coordinate system.
8. An electronic device, the electronic device comprising:
one or more processors;
A memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the violent sorting behavior identification method of any one of claims 1 to 6.
9. A computer readable storage medium, having stored thereon a computer program, the computer program being loaded by a processor to perform the steps of the violent sorting behavior identification method of any one of claims 1 to 6.
CN201911369379.2A 2019-12-26 2019-12-26 Violent sorting behavior identification method and device and computer readable storage medium Active CN113051968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911369379.2A CN113051968B (en) 2019-12-26 2019-12-26 Violent sorting behavior identification method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911369379.2A CN113051968B (en) 2019-12-26 2019-12-26 Violent sorting behavior identification method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113051968A CN113051968A (en) 2021-06-29
CN113051968B true CN113051968B (en) 2024-03-01

Family

ID=76505564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911369379.2A Active CN113051968B (en) 2019-12-26 2019-12-26 Violent sorting behavior identification method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113051968B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017027172A1 (en) * 2015-08-13 2017-02-16 Google Inc. Systems and methods to transition between viewpoints in a three-dimensional environment
CN106000904A (en) * 2016-05-26 2016-10-12 北京新长征天高智机科技有限公司 Automatic sorting system for household refuse
CN107358194A (en) * 2017-07-10 2017-11-17 南京邮电大学 A kind of violence sorting express delivery determination methods based on computer vision
CN108555908A (en) * 2018-04-12 2018-09-21 同济大学 A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras
CN109126121A (en) * 2018-06-01 2019-01-04 成都通甲优博科技有限责任公司 AR terminal interconnected method, system, device and computer readable storage medium
CN110332887A (en) * 2019-06-27 2019-10-15 中国地质大学(武汉) A kind of monocular vision pose measurement system and method based on characteristic light punctuate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vision detection algorithm for spatial circular arc pose based on radius constraint; Liu Lingyun; Luo Min; Wu Yuemin; Li Huiling; Ma Bin; Modular Machine Tool & Automatic Manufacturing Technique (01); full text *

Also Published As

Publication number Publication date
CN113051968A (en) 2021-06-29

Similar Documents

Publication Publication Date Title
Huang et al. Efficient image stitching of continuous image sequence with image and seam selections
WO2020184207A1 (en) Object tracking device and object tracking method
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
Wang et al. Tracking by joint local and global search: A target-aware attention-based approach
CN110555838A (en) Image-based part fault detection method and device
WO2020107326A1 (en) Lane line detection method, device and computer readale storage medium
CN111798487A (en) Target tracking method, device and computer readable storage medium
KR102226843B1 (en) System and method for object detection
CN112669275A (en) PCB surface defect detection method and device based on YOLOv3 algorithm
CN108961316A (en) Image processing method, device and server
CN113326836B (en) License plate recognition method, license plate recognition device, server and storage medium
CN113051968B (en) Violent sorting behavior identification method and device and computer readable storage medium
WO2021138893A1 (en) Vehicle license plate recognition method and apparatus, electronic device, and storage medium
CN115471439A (en) Method and device for identifying defects of display panel, electronic equipment and storage medium
CN117173439A (en) Image processing method and device based on GPU, storage medium and electronic equipment
CN111050027B (en) Lens distortion compensation method, device, equipment and storage medium
JP5930808B2 (en) Image processing apparatus, image processing apparatus control method, and program
CN112532884A (en) Identification method and device and electronic equipment
US11314968B2 (en) Information processing apparatus, control method, and program
CN115619698A (en) Method and device for detecting defects of circuit board and model training method
CN116208842A (en) Video processing method, apparatus, device and computer readable storage medium
CN113538449A (en) Image correction method, device, server and storage medium
CN114820786A (en) Object hitting method and device, terminal equipment and readable storage medium
KR102648852B1 (en) Taffic System that Can Change the Direction of the Crackdown
CN111586299B (en) Image processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant