CN111540009A - Method, apparatus, electronic device, and medium for generating detection information


Info

Publication number
CN111540009A
CN111540009A (application number CN202010329283.XA)
Authority
CN
China
Prior art keywords
ball
video
mapping
detection information
desktop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010329283.XA
Other languages
Chinese (zh)
Inventor
陈家泽
李磊
孙照月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010329283.XA priority Critical patent/CN111540009A/en
Publication of CN111540009A publication Critical patent/CN111540009A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30221 Sports video; Sports image
    • G06T2207/30224 Ball; Puck

Abstract

Embodiments of the present disclosure disclose methods, apparatuses, electronic devices, and media for generating detection information. One embodiment of the method comprises: acquiring a billiards video; determining the positions of the table and the balls displayed in the billiards video; mapping the table and the balls to a predetermined display plane based on those positions to obtain a mapping image; and generating detection information according to the obtained mapping image. The embodiment realizes the generation of detection information, enriches the game information, and makes it easier for commentators to explain the game.

Description

Method, apparatus, electronic device, and medium for generating detection information
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for generating detection information.
Background
With the development of the times, more and more people choose to watch videos of billiards games. Billiards matches are typically accompanied by a commentator's commentary on the game. Because the video of a billiards game may contain blind spots in the camera's line of sight, most commentators also use a simulated view to explain the progress of the game. During the view simulation, information about the table and balls corresponding to the billiards video needs to appear in the simulated view, both to enrich the game information and to support the commentator's commentary.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a method, apparatus, electronic device, and medium for generating detection information to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for generating detection information, the method comprising: acquiring a billiards video; determining the positions of the table and the balls displayed in the billiards video; mapping the table and the balls to a predetermined display plane based on the positions of the table and the balls to obtain a mapping image; and generating detection information according to the obtained mapping image.
In a second aspect, some embodiments of the present disclosure provide an apparatus for generating detection information, the apparatus comprising: an acquisition unit configured to acquire a billiards video; a determination unit configured to determine the positions of the table and the balls displayed in the billiards video; a mapping unit configured to map the table and the balls to a predetermined display plane based on the positions of the table and the balls to obtain a mapping image; and a generation unit configured to generate detection information based on the obtained mapping image.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in the first aspect.
One of the above-described various embodiments of the present disclosure has the following advantageous effects: the positions of the table and the balls displayed in the billiards video are determined from the acquired video. The table and the balls are then mapped to a predetermined display plane to obtain a mapping image, which simulates the game situation corresponding to the billiards video. Position-judgment errors caused by blind spots in the line of sight can therefore be largely avoided. Detection information is generated from the obtained mapping image, which enriches the game information and facilitates the commentator's commentary on the game.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIGS. 1-2 are schematic diagrams of an application scenario of the method for generating detection information of some embodiments of the present disclosure;
FIG. 3 is a flow diagram of some embodiments of a method for generating detection information according to the present disclosure;
FIG. 4 is a flow diagram of further embodiments of methods for generating detection information according to the present disclosure;
FIG. 5 is a schematic diagram of one application scenario of a method for generating detection information of some embodiments of the present disclosure;
FIGS. 6-8 are schematic illustrations of a mapping image of the method for generating detection information of some embodiments of the present disclosure;
FIG. 9 is a schematic block diagram of some embodiments of an apparatus for generating detection information according to the present disclosure;
FIG. 10 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1-2 are schematic diagrams of an application scenario of the method for generating detection information according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may input the acquired billiards video 102 into the pre-trained image detection model 103 to obtain the positions 104 of the table and the balls displayed in the billiards video 102. The computing device 101 then maps the table and the balls displayed in the billiards video 102 according to the positions 104, resulting in the mapping image 105 shown in fig. 2. Finally, the computing device 101 generates the detection information 106 from the obtained mapping image 105.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above, for example as multiple pieces of software or software modules providing distributed services, or as a single piece of software or a single software module. No specific limitation is imposed here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 3, a flow 300 of some embodiments of a method for generating detection information in accordance with the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The method for generating detection information comprises the following steps:
Step 301, acquiring a billiards video.
In some embodiments, the execution body of the method for generating detection information (e.g., the computing device 101 shown in fig. 1) may acquire the billiards video through a wired or wireless connection. The billiards video may be a video whose content includes a billiards game or match. For example, the execution body may receive a video input by a user as the billiards video. As another example, the execution body may obtain a video from a local video library as the billiards video. As yet another example, the execution body may connect to another electronic device in a wired or wireless manner and acquire a video from the video library of the connected device as the billiards video.
It should be noted that the wireless connection means may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (ultra wideband) connection, and other wireless connection means now known or developed in the future.
Step 302, determining the positions of the table and the balls displayed in the billiards video.
In some embodiments, the execution body of the method for generating detection information (e.g., the computing device 101 shown in fig. 1) may perform image detection on the billiards video. Here, image detection is generally used to detect the positions of the table, and of the balls on the table, displayed in the billiards video. For example, the execution body may input the billiards video into a pre-trained image detection model to obtain the positions of the table and of the balls on the table.
In some optional implementations of some embodiments, the image detection model may be a pre-trained neural network model for determining the positions of the table and the balls. The image detection model may be obtained by training with the sample videos to be detected in a training sample set as input, and the sample positions corresponding to those videos as the expected output.
Step 303, mapping the table and the ball to a predetermined display plane based on the positions of the table and the ball to obtain a mapping image.
In some embodiments, the execution body may map the table and the balls displayed in the billiards video to the predetermined display plane according to the positions obtained in step 302. Here, the mapping may project an image on one plane onto another plane by means of an affine transformation. The execution body can project the table and the balls onto the predetermined display plane according to the obtained positions to obtain a mapping image. Here, the mapping image may be a planar image presented from a top-down view of the table.
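As an illustrative sketch only (not part of the claimed embodiments), the projection of detected positions onto a top-down display plane can be modeled as a planar homography fitted from four table-corner correspondences; the corner coordinates below are hypothetical:

```python
import numpy as np

def fit_homography(src, dst):
    """Fit the 3x3 homography H that maps the four src corners to dst."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H (up to scale) is the singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def map_point(H, pt):
    """Project one image point into the display plane (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical table corners as detected in a perspective video frame ...
frame_corners = [(120, 80), (520, 90), (600, 400), (60, 390)]
# ... and the corresponding corners of the top-down display plane.
plane_corners = [(0, 0), (400, 0), (400, 200), (0, 200)]

H = fit_homography(frame_corners, plane_corners)
ball_on_plane = map_point(H, (320, 240))  # a detected ball position
```

A detected table corner then lands exactly on the corresponding display-plane corner, and any ball position inside the table maps consistently between them.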
Step 304, generating detection information based on the obtained mapping image.
In some embodiments, the execution body may establish a coordinate system on the mapping image obtained in step 303, with the length and width of the table displayed in the billiards video as the coordinate axes. The execution body may calculate the coordinates of the point where each ball is located from the position of the ball, and may format the obtained coordinate information in a predetermined format to obtain the detection information. The detection information may be information describing the positions of the balls on the table. The predetermined format may be a preset writing format for the detection information, for example, "number/name of ball + 'predetermined text' + coordinate information".
As an example, the execution body calculates that the coordinates of the point where 'ball No. 1' is located are (165, 136), and the coordinates of the point where 'ball No. 2' is located are (298, 549). The detection information may then be "the position coordinates of 'ball No. 1' are (165, 136); the position coordinates of 'ball No. 2' are (298, 549)".
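A minimal sketch of this formatting step, assuming the hypothetical "name + predetermined text + coordinates" layout described above:

```python
def format_detection(ball_coords):
    """Write each ball's coordinates in the predetermined text format."""
    parts = [
        "the position coordinates of '%s' are (%d, %d)" % (name, x, y)
        for name, (x, y) in ball_coords.items()
    ]
    return "; ".join(parts)

info = format_detection({"ball No. 1": (165, 136), "ball No. 2": (298, 549)})
```

With the example coordinates above, `info` reproduces the detection-information string given in the text.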
One of the above-described various embodiments of the present disclosure has the following advantageous effects: the positions of the table and the balls displayed in the billiards video are determined from the acquired video. The table and the balls are then mapped to a predetermined display plane to obtain a mapping image, which simulates the game situation corresponding to the billiards video. Position-judgment errors caused by blind spots in the line of sight can therefore be largely avoided. Detection information is generated from the obtained mapping image, which enriches the game information and facilitates the commentator's commentary on the game.
With continued reference to fig. 4, a flow 400 of further embodiments of methods for generating detection information according to the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The method for generating the detection information comprises the following steps:
Step 401, acquiring a target video.
In some embodiments, the execution body of the method for generating detection information (e.g., the computing device 101 shown in fig. 1) may acquire the target video through a wired or wireless connection. The target video may be a video whose content includes a billiards game or match. For example, the execution body may receive a video input by a user as the target video; obtain a video from a local video library as the target video; or connect to another electronic device in a wired or wireless manner and acquire a video from the video library of the connected device as the target video.
It should be noted that the wireless connection means may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (ultra wideband) connection, and other wireless connection means now known or developed in the future.
Step 402, detecting a video clip showing a table from the target video.
In some embodiments, the execution body may perform image detection on the target video to determine the video segment of the target video in which a table is displayed. Here, the image detection may determine whether a predetermined image is contained in the video or picture to be detected. The execution body may detect the target video frame by frame, and determine the portion of the target video from the position where a table first appears to the position where the table no longer appears as the video clip.
Step 403, extracting the video clip from the target video as the billiards video.
In some embodiments, the execution body may extract the video clip in which the table is displayed, as determined in step 402, from the target video to obtain the billiards video.
As an example, the execution body may input the target video into a pre-trained video clipping model to obtain the clipped video as the billiards video.
Step 404, determining the positions of the table and the balls displayed in the billiards video.
In some embodiments, the specific implementation of step 404 and the technical effect thereof may refer to step 302 in those embodiments corresponding to fig. 3, and are not described herein again.
As an example, in the application scenario of fig. 5, first, the computing device 101 may clip the target video 502 to obtain the video 503 to be detected. The computing device 101 may then input the video 503 to be detected into the pre-trained image detection model 504 to obtain the positions 505 of the table and the balls displayed in the video 503.
Step 405, determining the shooting angle of view of the billiards video based on the determined position of the table.
In some embodiments, based on the determined position of the table, the execution body may scan the area covered by the table cloth using a scanning imaging method, and display an outline of that area. Here, the outline display may add a frame around the target region, from which the execution body obtains an outlined area map. The execution body may determine the shooting angle of view of the billiards video in various ways. For example, the execution body may input the outlined area map into a pre-trained view-angle determination model to obtain the shooting angle of view of the billiards video. As another example, the execution body may compare the shape of the outlined area and determine the shooting angle of view from the comparison result: if the result is "parallelogram", the shooting angle of view of the billiards video may be determined to be an oblique angle; if the result is "trapezoid", the shooting angle of view may be determined to be a front angle or a side angle.
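One possible sketch of the shape-comparison branch (the disclosure leaves the comparison method open, and the corner coordinates below are hypothetical): a parallelogram outline has two pairs of parallel opposite sides, a trapezoid has one.

```python
def viewing_angle(corners):
    """Classify the outlined table region by which opposite sides are parallel.
    corners: four (x, y) points in order around the quadrilateral."""
    def edge(i, j):
        return (corners[j][0] - corners[i][0], corners[j][1] - corners[i][1])
    def parallel(u, v):
        return abs(u[0] * v[1] - u[1] * v[0]) < 1e-9   # zero 2-D cross product
    sides = [edge(0, 1), edge(1, 2), edge(2, 3), edge(3, 0)]
    pairs = parallel(sides[0], sides[2]) + parallel(sides[1], sides[3])
    if pairs == 2:
        return "oblique angle"          # parallelogram outline
    if pairs == 1:
        return "front or side angle"    # trapezoid outline
    return "unknown"

angle_a = viewing_angle([(0, 0), (4, 0), (5, 2), (1, 2)])  # parallelogram input
angle_b = viewing_angle([(0, 0), (4, 0), (3, 2), (1, 2)])  # trapezoid input
```

The cross-product test replaces the model-based comparison with a purely geometric one; a trained view-angle model as described in the text would be more robust to detection noise.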
Step 406, mapping the table and the balls to the predetermined display plane to obtain a mapping image, based on the positions of the balls, the shooting angle of view of the billiards video, the mapped angle of view, and a predetermined mapping relationship.
In some embodiments, the execution body may first project the table onto the predetermined display plane according to the shooting angle of view of the billiards video, the mapped angle of view, and the predetermined mapping relationship. The execution body may then project the balls onto the predetermined display plane by matrix transformation according to the positions obtained in step 404 and the predetermined mapping relationship, thereby obtaining the mapping image. The mapped angle of view may be the angle of the mapping image relative to the vertical direction of the table displayed in the billiards video. The predetermined mapping relationship may be a predetermined function characterizing the angular relationship between the shooting angle of view and the mapped angle of view.
In response to a change in the position of a ball, the position of that ball in the mapping image changes accordingly. The mapped angle of view may be preset to meet actual requirements, for example a top-down view.
Step 407, determining the motion trajectory of the ball according to the obtained mapping image.
In some embodiments, the execution body may first establish a coordinate system on the mapping image, with the length and width of the table displayed in the billiards video as the coordinate axes. The execution body may then calculate the coordinates of the coordinate point representing the ball from the position of the ball. In response to the coordinates of that point changing, the execution body may record the first coordinates from before the change. While the coordinates are changing, the execution body may sample at least one second coordinate. In response to the coordinates no longer changing, the execution body may record the third coordinates after the change. Connecting the first coordinates, each second coordinate, and the third coordinates with straight segments yields the movement track of the coordinate point, which the execution body may use to represent the movement track of the ball.
As an example, the coordinates of the corresponding coordinate point before the ball moves are (136, 185), and the coordinates when the ball stops moving are (169, 267). The line segment connecting these two coordinate points is then the movement track of the ball.
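The first/second/third-coordinate bookkeeping above can be sketched as follows, under the simplifying assumption that mapped coordinates arrive as a discrete stream; the sample values are hypothetical:

```python
def movement_track(samples):
    """Reduce a stream of mapped coordinates to the track's way-points:
    the first coordinates before any change, the sampled second coordinates
    while the ball moves, and the third coordinates once it stops."""
    track = [samples[0]]
    for pt in samples[1:]:
        if pt != track[-1]:        # keep only actual position changes
            track.append(pt)
    return track

samples = [(136, 185), (136, 185), (150, 220), (169, 267), (169, 267)]
track = movement_track(samples)
segments = list(zip(track, track[1:]))   # straight segments joining way-points
```

Each consecutive pair in `segments` is one of the straight-line connections the text describes; drawn together they form the ball's movement track.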
As an example, the mapping image shown in fig. 6 may be obtained according to step 407. In response to the change in the position (coordinates) of 'ball No. 1', it is determined that 'ball No. 1' starts moving and hits 'ball No. 2', as shown in fig. 7. The directions indicated by the arrows are the moving directions of 'ball No. 1' and 'ball No. 2', respectively, and the line segments from the start point to the end point of each arrow are their motion trajectories. In response to the positions (coordinates) of 'ball No. 1' and 'ball No. 2' no longer changing, the mapping image shown in fig. 8 may be obtained.
Step 408, performing color detection on the balls to determine the colors of the balls.
In some embodiments, the execution body may perform color detection on the balls using a color detector to determine the colors of the balls.
Step 409, generating detection information based on the colors of the balls and their motion trajectories.
In some embodiments, the execution body may determine whether two balls collide according to whether the straight lines on which their movement tracks lie have an intersection. The execution body may then describe the colors of the balls and the collision situation in text to obtain the detection information. For example, the execution body determines from the movement track of the white ball and the position of the black ball that the white ball is moving toward the black ball. From the intersection of the lines on which the motion trajectories of the white ball and the black ball lie, it determines that the two balls collide. The detection information may then be "white ball motion hits black ball".
In some optional implementations of some embodiments, it may be determined whether a ball is within a preset area according to the coordinates of the coordinate point characterizing the ball; in response to determining that it is, detection information is generated. The preset area may be a predetermined area characterizing where a pocket of the table is located. For example, in response to the coordinates of the coordinate point characterizing the ball falling within the preset area, the detection information may be "white ball movement hits the yellow-ball pocket".
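A sketch of this preset-area check, assuming circular pocket regions at hypothetical coordinates on the mapped plane:

```python
POCKETS = {                        # hypothetical pocket centers on the plane
    "yellow ball pocket": (0, 0),
    "corner pocket": (400, 200),
}
POCKET_RADIUS = 12                 # hypothetical pocket radius

def pocket_hit(ball):
    """Return the pocket whose preset circular area contains the ball, if any."""
    for name, (cx, cy) in POCKETS.items():
        if (ball[0] - cx) ** 2 + (ball[1] - cy) ** 2 <= POCKET_RADIUS ** 2:
            return name
    return None
```

A mapped ball coordinate near (0, 0) falls inside the yellow-ball pocket area, so the corresponding detection information can be generated; a coordinate in open table space matches no pocket.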
One of the above-described various embodiments of the present disclosure has the following advantageous effects: first, the target video is detected and clipped to obtain the billiards video, which reduces the workload of video detection. The billiards video is then detected to obtain the positions of the balls and the table displayed in it, and the shooting angle of view of the billiards video can further be determined. The displayed balls and table are mapped to the predetermined display plane to obtain a mapping image; because the mapping image changes as the positions of the balls change, the game situation corresponding to the billiards video can be vividly simulated. The generated detection information is thus more helpful to the commentator in explaining the game.
With further reference to fig. 9, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an apparatus for generating detection information. These apparatus embodiments correspond to the method embodiments described above with reference to fig. 3, and the apparatus may be applied in various electronic devices.
As shown in fig. 9, the apparatus 900 for generating detection information of some embodiments includes: an acquisition unit 901, a determination unit 902, a mapping unit 903, and a generation unit 904. The acquisition unit 901 is configured to acquire a billiards video; the determination unit 902 is configured to determine the positions of the table and the balls displayed in the billiards video; the mapping unit 903 is configured to map the table and the balls to a predetermined display plane based on those positions to obtain a mapping image; and the generation unit 904 is configured to generate detection information based on the obtained mapping image.
In some optional implementations of some embodiments, the acquisition unit 901 of the apparatus 900 is further configured to: acquire a target video; detect a video clip showing a table from the target video; and extract the video clip from the target video as the billiards video.
In some optional implementations of some embodiments, the mapping unit 903 of the apparatus 900 is further configured to: determine the shooting angle of view of the billiards video based on the determined position of the table; and map the table and the balls to the predetermined display plane based on the positions of the balls, the shooting angle of view, the mapped angle of view, and the predetermined mapping relationship to obtain the mapping image.
In some optional implementations of some embodiments, the apparatus 900 is further configured to perform color detection on the balls to determine the colors of the balls.
In some optional implementations of some embodiments, the generation unit 904 of the apparatus 900 is further configured to: determine the motion trajectories of the balls according to the mapping image; and generate detection information based on the colors of the balls and their motion trajectories.
It will be understood that the elements described in the apparatus 900 correspond to various steps in the method described with reference to fig. 3. Thus, the operations, features, and advantages described above with respect to the method are also applicable to the apparatus 900 and the units included therein, and are not described herein again.
Referring now to FIG. 10, a block diagram of an electronic device 1000 (e.g., the computing device 101 of FIG. 1) suitable for implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 10 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage means 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the electronic device 1000 are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 1007 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 1008 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 1009. The communication device 1009 may allow the electronic device 1000 to communicate with other devices wirelessly or by wire to exchange data. While fig. 10 illustrates an electronic device 1000 having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in fig. 10 may represent one device or multiple devices, as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. The computer program, when executed by the processing apparatus 1001, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a tabletop ball game video; determine positions of a table and a ball displayed in the tabletop ball game video; map the table and the ball to a predetermined display plane based on the positions of the table and the ball, to obtain a mapped image; and generate detection information according to the obtained mapped image.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor including an acquisition unit, a determination unit, a mapping unit, and a generation unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the acquisition unit may also be described as a "unit that acquires a tabletop ball game video".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In accordance with one or more embodiments of the present disclosure, there is provided a method for generating detection information, including: acquiring a tabletop ball game video; determining positions of a table and a ball displayed in the tabletop ball game video; mapping the table and the ball to a predetermined display plane based on the positions of the table and the ball, to obtain a mapped image; and generating detection information according to the obtained mapped image.
According to one or more embodiments of the present disclosure, the acquiring a tabletop ball game video includes: acquiring a target video; detecting, in the target video, a video clip showing a table; and intercepting the video clip from the target video as the tabletop ball game video.
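The clip-detection step just described can be sketched as run-length grouping of per-frame table-detection results: frames in which a table is detected are grouped into contiguous ranges, and each sufficiently long range is intercepted as one candidate clip. The function name and `min_len` threshold are illustrative assumptions:

```python
def extract_table_clips(table_visible, min_len=2):
    """Group per-frame table-detection flags into contiguous
    [start, end) frame ranges; each range is one candidate clip."""
    clips, start = [], None
    for i, visible in enumerate(table_visible):
        if visible and start is None:
            start = i                       # a run of table frames begins
        elif not visible and start is not None:
            if i - start >= min_len:        # keep only runs long enough
                clips.append((start, i))
            start = None
    if start is not None and len(table_visible) - start >= min_len:
        clips.append((start, len(table_visible)))  # run reaches video end
    return clips
```

The returned frame ranges could then be cut from the target video with any video-editing backend to obtain the tabletop ball game video.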
According to one or more embodiments of the present disclosure, the mapping the table and the ball to a predetermined display plane based on the determined positions of the table and the ball, to obtain a mapped image, includes: determining a shooting angle of view of the tabletop ball game video based on the determined position of the table; and mapping the table and the ball to the predetermined display plane based on the position of the ball, the shooting angle of view, a mapping angle of view, and a predetermined mapping relationship, to obtain the mapped image.
According to one or more embodiments of the present disclosure, the method further includes: performing color detection on the ball to determine a color of the ball.
According to one or more embodiments of the present disclosure, the generating detection information according to the obtained mapped image includes: determining a motion trajectory of the ball according to the mapped image; and generating the detection information based on the color of the ball and the motion trajectory.
According to one or more embodiments of the present disclosure, there is provided an apparatus for generating detection information, including: an acquisition unit configured to acquire a tabletop ball game video; a determination unit configured to determine positions of a table and a ball displayed in the tabletop ball game video; a mapping unit configured to map the table and the ball to a predetermined display plane based on the positions of the table and the ball, to obtain a mapped image; and a generation unit configured to generate detection information based on the obtained mapped image.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the embodiments above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the embodiments above.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.

Claims (8)

1. A method for generating detection information, comprising:
acquiring a tabletop ball game video;
determining positions of a table and a ball displayed in the tabletop ball game video;
mapping the table and the ball to a predetermined display plane based on the positions of the table and the ball, to obtain a mapped image; and
generating detection information according to the obtained mapped image.
2. The method of claim 1, wherein the acquiring a tabletop ball game video comprises:
acquiring a target video;
detecting, in the target video, a video clip showing a table; and
intercepting the video clip from the target video as the tabletop ball game video.
3. The method of claim 1, wherein the mapping the table and the ball to a predetermined display plane based on the determined positions of the table and the ball, to obtain a mapped image, comprises:
determining a shooting angle of view of the tabletop ball game video based on the determined position of the table; and
mapping the table and the ball to the predetermined display plane based on the position of the ball, the shooting angle of view, a mapping angle of view, and a predetermined mapping relationship, to obtain the mapped image.
4. The method of claim 3, wherein the method further comprises:
performing color detection on the ball to determine a color of the ball.
5. The method of claim 4, wherein the generating detection information according to the obtained mapped image comprises:
determining a motion trajectory of the ball according to the mapped image; and
generating the detection information based on the color of the ball and the motion trajectory.
6. An apparatus for generating detection information, comprising:
an acquisition unit configured to acquire a tabletop ball game video;
a determination unit configured to determine positions of a table and a ball displayed in the tabletop ball game video;
a mapping unit configured to map the table and the ball to a predetermined display plane based on the positions of the table and the ball, to obtain a mapped image; and
a generation unit configured to generate detection information based on the obtained mapped image.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
8. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
CN202010329283.XA 2020-04-23 2020-04-23 Method, apparatus, electronic device, and medium for generating detection information Pending CN111540009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010329283.XA CN111540009A (en) 2020-04-23 2020-04-23 Method, apparatus, electronic device, and medium for generating detection information

Publications (1)

Publication Number Publication Date
CN111540009A true CN111540009A (en) 2020-08-14

Family

ID=71977199


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529941A (en) * 2020-12-17 2021-03-19 深圳市普汇智联科技有限公司 Multi-target tracking method and system based on depth trajectory prediction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102792685A (en) * 2010-04-12 2012-11-21 住友重机械工业株式会社 Processing target image generation device, processing target image generation method, and operation support system
CN103414870A (en) * 2013-07-16 2013-11-27 南京师范大学 Multiple-mode alert analysis method
CN103871078A (en) * 2013-07-12 2014-06-18 北京瑞盖科技有限公司 Billiard ball hitting key information detection method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.