US20180247054A1 - Method and system for piracy detection - Google Patents
Method and system for piracy detection
- Publication number
- US20180247054A1 (application US 15/444,351)
- Authority
- US
- United States
- Prior art keywords
- content items
- correlation
- machine learning
- learning system
- pirated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6272—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database by registering files or documents with a third party
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
Definitions
- the present disclosure generally relates to methods and systems for comparing versions of content files.
- Copyright holders seek to identify copyright violations which occur when copyrighted content, such as a copyrighted video, is pirated. Such content, to which access has been made available in violation of copyright, may be referred to as hacked video, hacked content, rogue content, pirated content, or other similar terms.
- FIG. 1 is a simplified block diagram of a comparison of an original content item with similar content items, as performed by a comparison system which is constructed and operative in accordance with an embodiment of the present invention
- FIG. 2 is a depiction of an exemplary comparison of content items using the comparison system of FIG. 1;
- FIGS. 3-5 are a series of block diagrams detailing a method of determining a correlation graph, similar to a correlation graph depicted in FIG. 2 ;
- FIG. 6 is a correlation graph resulting from the example of FIGS. 3-5 ;
- FIG. 7 is a depiction of a plurality of correlation graphs, such as the correlation graph of FIG. 6 , for inputting into a machine learning system;
- FIG. 8 is a block diagram of an exemplary device comprising one or both of the machine learning system and a comparator which performs the comparison of the content items as described above with reference to FIGS. 2-6 ;
- FIG. 9 is a simplified flow chart diagram of a method for an embodiment of the system of FIG. 1.
- a system, apparatus and a method including, a storage device and a memory operative to store a plurality of target content items, a comparator operative to compare each one content item of the plurality of target content items with the other content items of the plurality of target content items, and, at least on the basis of comparing each one content item of the plurality of target content items with the other content items of the plurality of target content items, to develop a correlation graph indicating a level of correlation between each one content item of the plurality of target content items and the other content items of the plurality of target content items, and a machine learning system operative to receive, as an input, the correlation graph and to output a decision, on the basis of the level of correlation shown in the correlation graph, which indicates if the content items represented in the correlation graph are pirated content items or are not pirated content items.
- FIG. 1 is a simplified block diagram of a comparison of an original content item with similar content items, as performed by a comparison system which is constructed and operative in accordance with an embodiment of the present invention.
- a first content item 110, designated in FIG. 1 as an “Original Content” item, is depicted.
- a number of additional versions 120, 130, 140 of the content item are also depicted, designated in FIG. 1, respectively, as V1, V2, Vn.
- the additional versions V1 120, V2 130, and Vn 140 of the Original Content item 110 are pirated copies of the Original Content item 110.
- Dotted arrows 160 indicate that the additional versions V1 120, V2 130, and Vn 140 are somehow related to the Original Content item 110.
- the path of the relationship may be direct, as indicated by the solid arrow 170 between additional version V2 130 and the Original Content item 110.
- the relationship between the Original Content item 110 and one of the additional versions V1 120 and Vn 140 may be indirect, as indicated by solid arrows indicating a direct relationship between V2 130 and Vn 140 (arrow 180), and a second direct relationship between Vn 140 and V1 120 (arrow 190).
- Accordingly, as depicted in FIG. 1, the relationship between the various content items 110, 120, 130, and 140 is: Original Content item 110 - V2 130 - Vn 140 - V1 120, where the dashes (i.e., “-”) may be understood as showing a “chain” of direct relationships.
- Persons who attempt to gain unauthorized access to copyrighted video are sometimes referred to as “Hackers,” “Rogues,” or “Pirates”.
- Such content, to which a hacker, rogue, or pirate has gained unauthorized or illegal access may be referred to as hacked content, rogue content, pirated content, or other similar terms.
- Pirates may attempt to distribute the content over rogue content distribution networks, peer-to-peer file sharing methods, and so forth. It is often the case that pirated copies of the copyrighted content have been somehow manipulated using various methods, which are typically known methods, in order to make automated detection of pirated copies difficult.
- Such manipulations include, for example: change of color, cropping, rotation/translation, audio mute/swap.
- Methods for comparison of versions of files of content are discussed below, particularly with reference to FIGS. 2-5 .
- Content may include video content, audio content, or other formats of content which are suitable for consumption.
- a level of correspondence between two files which have been differently manipulated may be used to determine a level of confidence that the various versions of the content item (such as V1 120, V2 130, and Vn 140) are copies, with variations, of the same Original Content source file (such as Original Content item 110).
- the difference between two different pirated copies may be smaller than the difference between the original and a pirated copy.
- a comparison method applied to Original Content item 110 and content item Vn 140, or to Original Content item 110 and content item V1 120, may find little or no correlation between them. However, a high level of correlation may be detected between content item Vn 140 and content item V1 120. For two versions of a content item to be considered similar, the comparison rate between them should be greater than a predefined threshold.
- content item versions V1 120 and Vn 140 may not be directly connected to the Original Content item 110 (note that the solid arrows 170, 180, and 190 do not indicate any direct connection between content item V1 120 and content item V2 130, nor between Original Content item 110 and content item V1 120), but, since there is a path between them, they may still be identified as the same content item.
- FIG. 1 may be viewed as an undirected graph, i.e., a set of nodes, in this case content items, connected together by bi-directional edges. Specifically, since the content item V1 120 is connected to the content item Vn 140, as indicated by an edge (solid arrow 190), it is also true that the content item Vn 140 is related to the content item V1 120. For the purposes of the discussion below, the graph of FIG. 1 will be referred to as a “correlation graph”.
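The undirected correlation graph just described can be sketched in a few lines of Python (the function names and item labels here are illustrative, not from the patent): content items are nodes, above-threshold correlations are edges, and any two items joined by a path are treated as versions of the same content.

```python
from collections import defaultdict, deque

def build_graph(edges):
    """Undirected adjacency map from (item, item) correlation edges."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def same_content(graph, start, target):
    """Breadth-first search: True if a chain of direct correlations links the items."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for neighbour in graph[node] - seen:
            seen.add(neighbour)
            queue.append(neighbour)
    return False

# The three solid arrows of FIG. 1: Original-V2 (170), V2-Vn (180), Vn-V1 (190).
graph = build_graph([("Original", "V2"), ("V2", "Vn"), ("Vn", "V1")])
print(same_content(graph, "Original", "V1"))  # True, via the chain Original-V2-Vn-V1
```

V1 is never directly compared against the Original, yet the path through V2 and Vn still identifies it as the same content item.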
- FIG. 2 is a depiction of an exemplary comparison of video content items using the comparison system of FIG. 1 .
- a timeline 210 indicates that the exemplary illustrations of frames of video content which appear on the left side of the figure occur earlier in the video (i.e., the content item) than those which appear further to the right. It is appreciated that the example provided in FIG. 2 and the following discussion is a simplified example, provided in order to highlight the operation of an embodiment, which a person of skill in the art would be able to generalize for application in more complex situations.
- a grid 220 shows video frames, respectively from the earliest to the latest in the video file. Columns in the grid 220 indicate: a first frame; a second frame; and a third frame. Rows in the grid 220 indicate versions of the content as depicted in FIG. 1 .
- the Original Content item 110 is denoted, for convenience sake, as ①; Version V1 120 is denoted as ②; Version V2 130 is denoted as ③; and Version Vn 140 is denoted as ④.
- Each frame depicted in the grid 220 is shown with a number beneath it.
- the number is an indication of the number of occurrences of faces appearing in the frame.
- Face detection is a technique known in the art, and is a subcategory of feature detection, where the number of occurrences of a particular feature appearing in a given video frame is counted. Because, as discussed above, pirated videos are manipulated, different types of manipulations (as will be detailed below) are depicted, as well as the effect of each manipulation on the number of occurrences of features.
- An arrow emphasizes the area of the manipulation in the First Frame column of the grid 220. So, for example, the second frame of Version V2 130 has a 1 beneath it, because only one face appears in the frame.
- Content version V1 120 (②) is shown as having been cropped on the left side of the frame.
- Content version V2 130 (③) is shown as having been cropped on the right side of the frame.
- Content version Vn 140 (④) is shown as having been cropped on the bottom of the frame.
- In the example shown, the first frame shows two faces appearing in the frame; the second frame shows one face; and the third frame shows three faces.
- A comparison of the number of faces in each frame of the Original Content item 110, V1 120, V2 130, and Vn 140 (all from FIG. 1) is as follows:
- Table 1 summarizes the number of faces in each frame of each version of the content item (the information of which also appears in FIG. 2 ).
- Table 2 shows two of the rows in Table 1 for version V 1 120 and the Original Content item 110 .
- version V 1 120 is considered to be a copy of the Original Content item 110 .
- a correlation (undirected) graph 230 is thereby created, where an arrow indicates that ② is derived from ① (i.e., ① → ②).
- Table 3 shows two of the rows in Table 1 for Version V 2 130 and the Original Content item 110 :
- version V 2 130 is considered to have been made as a copy of the Original Content item 110 .
- an arrow in the correlation graph 230 indicates that ③ is derived from ①.
- correlation graph 230 shows that ④ is derived from ②. Although no table is provided here, by referring to Table 1 it can be seen that version V2 130 has only one cell which matches version Vn 140. Thus, correlation graph 230 does not show any direct correlation between ③ and ④.
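The cell-by-cell matching behind Tables 1-3 can be sketched as follows. The face-count sequences and the two-cell threshold are illustrative placeholders (the full tables are not reproduced in this text); the point is only that two versions are treated as directly related when enough per-frame feature counts agree.

```python
def count_matches(counts_a, counts_b):
    """Number of frame positions whose feature (face) counts agree."""
    return sum(1 for a, b in zip(counts_a, counts_b) if a == b)

def is_copy(counts_a, counts_b, threshold=2):
    """Treat two versions as directly related if enough table cells match."""
    return count_matches(counts_a, counts_b) >= threshold

# Illustrative faces-per-frame counts for three frames of each version;
# each crop described in FIG. 2 would hide a face in one frame.
original = [2, 1, 3]
v1 = [1, 1, 3]   # hypothetical left crop hides a face in the first frame
v2 = [2, 1, 2]   # hypothetical right crop hides a face in the third frame

print(is_copy(original, v1), is_copy(original, v2), is_copy(v1, v2))  # True True False
```

Both cropped versions still match the original in two of three cells, while the two differently cropped versions match each other in only one cell, mirroring how indirect relatives can fail a direct comparison.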
- FIGS. 3-5 are a series of block diagrams detailing a method of determining a correlation graph, similar to the correlation graph 230 of FIG. 2. It is appreciated that the example given in FIGS. 3-5 uses an original content item ⓪ 310 and four additional content items: content ① 320; content ② 330; content ③ 340; and content ④ 350.
- a match is determined between the original content item ⓪ 310 and content ① 320 and content ② 330. No match, however, is found between content item ⓪ 310 and content ③ 340 and content ④ 350.
- content ③ 340 is found to match content ② 330.
- content ④ 350 is found to match content ③ 340.
- a correlation graph 380 resulting from the example of FIGS. 3-5 is depicted in FIG. 6 , graphically summarizing the results of the iterations of comparison depicted in FIGS. 3-5 .
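A minimal sketch of this all-pairs comparison follows; the fingerprint strings and the `close` predicate are toy stand-ins for the actual comparator, arranged so the matches form a chain (item 0 matches 1 and 2, item 3 matches 2, item 4 matches 3), loosely following the iterations of FIGS. 3-5.

```python
from itertools import combinations

def correlation_graph(items, matches):
    """Compare every pair of content items and record an edge per detected match,
    as in the iterations of FIGS. 3-5."""
    return [(a, b) for a, b in combinations(items, 2) if matches(a, b)]

# Toy fingerprints (not from the patent): two matching characters count as a match.
fingerprints = {0: "abc", 1: "abz", 2: "ayc", 3: "wyc", 4: "wyq"}
close = lambda a, b: sum(x == y for x, y in zip(fingerprints[a], fingerprints[b])) >= 2

print(correlation_graph(range(5), close))  # [(0, 1), (0, 2), (2, 3), (3, 4)]
```

The resulting edge list is exactly the kind of structure summarized graphically by correlation graph 380 in FIG. 6.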
- FIG. 7 is a depiction of a plurality of correlation graphs, such as the correlation graph 380 of FIG. 6 , for inputting into a machine learning system.
- the machine learning system may comprise a neural network, a system implementing a clustering algorithm, a system implementing a naïve Bayes classification algorithm, or another appropriate machine learning method as is known in the art.
- a set of correlation graphs used as training data 710 is depicted on the left side of FIG. 7 .
- the training data 710 data-set is input into the machine learning system to train the machine learning system to distinguish between types of correlation graphs which are indicative of non-pirated content and types of correlation graphs which are indicative of pirated content.
- the training data 710 data-set which comprises correlation graphs of known non-pirated related content items, such as correlation graph 720 and correlation graph 725 , as well as correlation graphs of known pirated related content items, such as correlation graph 730 , is input into the machine learning system.
- the machine learning system, via machine learning processes known in the art, learns to distinguish between the correlation graphs of non-pirated related content items, such as correlation graph 720 and correlation graph 725, which are typically non-sparse, and the correlation graphs of pirated content, such as correlation graph 730, which are typically sparse.
- Correlation graphs of pirated content are assumed to be sparse because pirates typically manipulate the video so that automatic detection via simple comparison becomes difficult. Because of the variety of manipulations, including change of color, cropping, rotation/translation, audio mute/swap, there is a lower correlation between the different files compared. By contrast, however, correlation graphs of non-pirated content are assumed to be non-sparse because little manipulation is anticipated in the files. Some level of variety might be introduced in non-pirated content due to variations introduced in legitimate operations, such as trans-coding. Where two content items are compared and have a higher level of similarity, as is assumed to be the case for groups of non-pirated content items, the nodes are graphed closer to one another.
- Each correlation graph input into the machine learning system, whether from the training data 710 data-set or from the unknown clusters 740 is a graph where each node represents the content and each edge represents a similarity factor.
- the correlation graphs 750, 760, 770, 780, and 790 for the unknown clusters will then be individually fed into the machine learning system, which will produce a result indicating whether the content items in the group of compared content items yielding each correlation graph are suspected of being groups of pirated content items (such as correlation graphs 780 and 790) or groups of non-pirated content items (such as correlation graphs 750, 760, and 770).
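The patent names neural networks, clustering, and naïve Bayes as candidate learners. Purely as an illustration (none of these function names, numbers, or the midpoint-threshold rule are from the patent), the sparse-versus-dense intuition can be reduced to a single edge-density feature with a threshold learned from labelled training graphs:

```python
def graph_density(num_nodes, edges):
    """Fraction of possible undirected edges present in a correlation graph."""
    possible = num_nodes * (num_nodes - 1) / 2
    return len(edges) / possible if possible else 0.0

def train(labelled_graphs):
    """Learn a midpoint threshold between the mean densities of the two classes."""
    pirated = [graph_density(n, e) for n, e, lbl in labelled_graphs if lbl == "pirated"]
    clean = [graph_density(n, e) for n, e, lbl in labelled_graphs if lbl == "non-pirated"]
    return (sum(pirated) / len(pirated) + sum(clean) / len(clean)) / 2

def classify(num_nodes, edges, threshold):
    """Sparse (low-density) correlation graphs are flagged as suspected pirated clusters."""
    return "pirated" if graph_density(num_nodes, edges) < threshold else "non-pirated"

# Hypothetical training data: dense clusters labelled non-pirated,
# sparse chains labelled pirated, echoing the assumption in the text.
training = [
    (4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], "non-pirated"),
    (4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)], "non-pirated"),
    (5, [(0, 1), (1, 2), (2, 3), (3, 4)], "pirated"),
    (5, [(0, 2), (2, 4)], "pirated"),
]
threshold = train(training)
print(classify(5, [(0, 1), (2, 3)], threshold))  # sparse unknown cluster -> pirated
```

A production system would use a richer graph representation and one of the learners the patent actually names; this sketch only shows how a decision can follow from the level of correlation shown in the graph.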
- FIG. 8 is a block diagram of an exemplary device 800 comprising one or both of the machine learning system and a comparator which performs the comparison of the content items as described above with reference to FIGS. 2-6 .
- the exemplary device 800 is suitable for implementing any of the systems, methods or processes described above.
- the exemplary device 800 comprises one or more processors, such as processor 801 , providing an execution platform for executing machine readable instructions such as software.
- One of the processors 801 may be a special purpose processor operative to perform the method for piracy detection described herein above.
- the system 800 also includes a main memory 803 , such as a Random Access Memory (RAM) 804 , where machine readable instructions may reside during runtime, and a secondary memory 805 .
- the secondary memory 805 includes, for example, a hard disk drive 807 and/or a removable storage drive 808 , representing a floppy diskette drive, a magnetic tape drive, a compact disk drive, a flash drive, etc., or a nonvolatile memory where a copy of the machine readable instructions or software may be stored.
- the secondary memory 805 may also include ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM).
- data representing any one or more of the various content items discussed herein throughout (for example, and without limiting the generality of the foregoing, Original Content item 110 and additional versions V1 120, V2 130, and Vn 140 of FIG. 1 and, correspondingly, FIG. 2, as well as original content item ⓪ 310, content ① 320, content ② 330, content ③ 340, and content ④ 350 of FIGS. 3-5), the correlation graphs (such as correlation graph 380 of FIG. 6, the training data 710, and the unknown clusters 740 of FIG. 7), or other similar data, may be stored in the main memory 803 and/or the secondary memory 805.
- the removable storage drive 808 reads from and/or writes to a removable storage unit 809 in a well-known manner.
- a user can interface with the exemplary device 800 via a user interface which includes input devices 811 , such as a touch screen, a keyboard, a mouse, a stylus, and the like in order to provide user input data.
- a display adaptor 815 interfaces with the communication bus 802 and a display 817 and receives display data from the processor 801 and converts the display data into display commands for the display 817 .
- a network interface 819 is provided for communicating with other systems and devices via a network (such as network 155 of FIG. 1).
- the network interface 819 typically includes a wireless interface for communicating with wireless devices.
- a wired network interface (e.g., an Ethernet interface) may be provided as well.
- the exemplary device 800 may also comprise other interfaces, including, but not limited to Bluetooth, and HDMI.
- the machine learning system 850 may be among the software and/or specialized hardware executed or controlled by the processor 801 .
- the machine learning system 850 may comprise any appropriate machine learning methods as are known in the art, including, but not limited to, a neural network, a clustering algorithm, or a naïve Bayes classification algorithm.
- a comparator 860 which may itself comprise either hardware, software, or a combination of both hardware and software, which performs the comparing method described above with reference to FIGS. 2-5 , and which outputs the correlation graphs such as correlation graph 380 of FIG. 6 , is also typically executed or controlled by the processor 801 .
- the exemplary device 800 shown in FIG. 8 is provided as an example of a possible platform that may be used, and other types of platforms may be used as is known in the art.
- One or more of the steps described above may be implemented as instructions embedded on a computer readable medium and executed on the exemplary device 800 .
- the steps may be embodied by a computer program, which may exist in a variety of forms both active and inactive. For example, they may exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats for performing some of the steps.
- any of the above may be embodied on a computer readable medium, which include storage devices and signals, in compressed or uncompressed form.
- suitable computer readable storage devices include conventional computer system RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes.
- Examples of computer readable signals, whether modulated using a carrier or not, are signals that a computer system hosting or running a computer program may be configured to access, including signals downloaded through the Internet or other networks. Concrete examples of the foregoing include distribution of the programs on a CD ROM or via Internet download. In a sense, the Internet itself, as an abstract entity, is a computer readable medium. The same is true of computer networks in general. It is therefore to be understood that those functions enumerated above may be performed by any electronic device capable of executing the above-described functions.
- FIG. 9 is a simplified flow chart diagram of a method for an embodiment of the system of FIG. 1.
- a plurality of target content items are stored in a storage device associated with a memory.
- At least one content item of the plurality of target content items is compared with the other content items of the plurality of target content items (step 920). At least on the basis of comparing the at least one content item of the plurality of target content items with the other content items of the plurality of target content items, a correlation graph indicating a level of correlation between each one content item of the plurality of target content items and the other content items of the plurality of target content items is developed (step 930).
- the correlation graph is input into a machine learning system.
- a decision is output from the machine learning system, the decision indicating, on the basis of the level of correlation shown in the correlation graph, if the content items represented in the correlation graph are pirated content items or are not pirated content items.
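The steps of FIG. 9 can be strung together in a short orchestration sketch. The `compare` and `decide` stand-ins below are illustrative stubs, not the patent's comparator or trained machine learning system; they only show the shape of the pipeline from stored items to an output decision.

```python
from itertools import combinations

def run_piracy_check(items, compare, decide):
    """FIG. 9 sketch: compare all pairs and build the correlation graph
    (steps 920-930), then let a trained classifier decide (steps 940-950)."""
    edges = [(a, b) for a, b in combinations(range(len(items)), 2)
             if compare(items[a], items[b])]
    return decide(len(items), edges)

# Illustrative stand-ins (not from the patent): two matching characters
# count as a correlation, and a sparse graph is flagged as pirated.
compare = lambda a, b: sum(x == y for x, y in zip(a, b)) >= 2
decide = lambda n, edges: "pirated" if len(edges) < n - 1 else "non-pirated"

print(run_piracy_check(["abc", "abz", "xyq"], compare, decide))  # pirated
```

Swapping in the cell-matching comparator and a trained learner would turn this skeleton into the full method without changing its control flow.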
- software components of the present invention may, if desired, be implemented in ROM (read only memory) or non-volatile memory form.
- the software components may, generally, be implemented in hardware, if desired, using conventional techniques.
- the software components may be instantiated, for example: as a computer program product or on a tangible medium. In some cases, it may be possible to instantiate the software components as a signal interpretable by an appropriate computer, although such an instantiation may be excluded in certain embodiments of the present invention.
Description
- Pirated content is often manipulated by pirates in an attempt to frustrate automatic detection systems, so that detection via simple comparison becomes difficult. Such manipulations may include, for example, but not be limited to: change of color, cropping, rotation/translation, audio mute/swap, video format transcoding, etc. Sometimes these manipulations occur as incidental byproducts of conversion from a source to a digital replication of the source.
- Reference is now made to FIG. 1, which is a simplified block diagram of a comparison of an original content item with similar content items, as performed by a comparison system which is constructed and operative in accordance with an embodiment of the present invention. A first content item 110, designated in FIG. 1 as an "Original Content" item, is depicted. A number of additional versions of the Original Content item 110 are also depicted, designated in FIG. 1, respectively, as V1, V2, and Vn. The additional versions V1 120, V2 130, and Vn 140 of the Original Content item 110 are pirated copies of the Original Content item 110. Dotted arrows 160 indicate that the additional versions V1 120, V2 130, and Vn 140 are somehow related to the Original Content item 110. The path of the relationship may be direct, as indicated by the solid arrow 170 between additional version V2 130 and the Original Content item 110. Alternatively, the relationship between the Original Content item 110 and one of the additional versions V1 120 and Vn 140 may be indirect, as indicated by solid arrows showing a direct relationship between V2 130 and Vn 140 (arrow 180), and a second direct relationship between Vn 140 and V1 120 (arrow 190). Accordingly, as depicted in FIG. 1, the relationship between the various content items may be direct or indirect.
- In cases where a direct relationship is detected (i.e., Original Content item 110-V2 130; V2 130-Vn 140; and Vn 140-V1 120), a threshold has been exceeded when the two versions are compared by the comparison system, as will be explained.
- Persons who attempt to gain unauthorized access to copyrighted video (e.g., Original Content item 110) are sometimes referred to as "Hackers," "Rogues," or "Pirates". Such content, to which a hacker, rogue, or pirate has gained unauthorized or illegal access, may be referred to as hacked content, rogue content, pirated content, or by other similar terms. Pirates may attempt to distribute the content over rogue content distribution networks, peer-to-peer file sharing methods, and so forth. It is often the case that pirated copies of the copyrighted content have been manipulated, using various typically known methods, in order to make automated detection of the pirated copies difficult. Such manipulations include, for example: change of color, cropping, rotation/translation, and audio mute/swap. Methods for comparison of versions of files of content are discussed below, particularly with reference to
FIGS. 2-5. Content may include video content, audio content, or other formats of content which are suitable for consumption.
- It is the opinion of the inventors that there is little value in pirating content that is freely available at one location on the Internet. For example, if a content item is freely available for distribution, for instance because it has been legitimately uploaded to a video sharing service such as YouTube™, then copies which may exist elsewhere on the Internet are also legitimate (i.e., not pirated) copies of the content item. Hence, there is little reason for someone who is making a legitimate copy of the freely available content item to manipulate the file in an attempt to disguise the file's source. By contrast, a content item which is uploaded in violation of the intellectual property rights of a content owner may be subject to manipulations of the sort mentioned above. A level of correspondence between two files which have been differently manipulated may therefore be used to determine a level of confidence that the various versions of the content item (such as V1 120, V2 130, and Vn 140) are copies, with variations, of the same Original Content source file (such as Original Content item 110).
- The difference between two different pirated copies may be smaller than the difference between the original and a pirated copy. By way of example, a comparison method applied to Original Content item 110 and content item Vn 140, or to Original Content item 110 and content item V1 120, may find little or no correlation between these versions. However, a high level of correlation may be detected between content item Vn 140 and content item V1 120. For two versions of a content item to be considered similar, the comparison rate between them should be greater than a predefined threshold. Thus, content item versions V1 120 and Vn 140 may not be directly connected to the Original Content item 110 (note that no solid arrow runs between the Original Content item 110 and content item Vn 140, nor between the Original Content item 110 and content item V1 120), but, since there is a path between them, they may still be identified as the same content item. -
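The transitive reasoning above (V1 and Vn need not match the Original directly, so long as a chain of direct matches connects them to it) amounts to finding connected components in the graph of direct matches. A minimal sketch of that step, using the version names of FIG. 1; the function and edge list are illustrative, not part of the patent:

```python
# Sketch: identify versions of the same content item as connected
# components of the direct-match graph. The edge list mirrors the
# FIG. 1 example (arrows 170, 180, 190); the code is illustrative only.
from collections import defaultdict


def connected_components(nodes, direct_matches):
    """Group nodes so that any two nodes joined by a chain of direct
    matches land in the same component."""
    adj = defaultdict(set)
    for a, b in direct_matches:  # direct matches are undirected
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:  # iterative depth-first search
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components


# Direct matches from FIG. 1: Original-V2 (170), V2-Vn (180), Vn-V1 (190).
edges = [("Original", "V2"), ("V2", "Vn"), ("Vn", "V1")]
groups = connected_components(["Original", "V1", "V2", "Vn"], edges)
```

Here all four versions fall into a single component, so V1 120 and Vn 140 are identified with the Original Content item 110 even though neither matches it directly.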
FIG. 1 may be viewed as an undirected graph, i.e., a set of nodes (in this case, content items) connected together by bi-directional edges. Specifically, since the content item V1 120 is connected to the content item Vn 140, as indicated by an edge (solid arrow 190), it is also true that the content item Vn 140 is related to the content item V1 120. For the purposes of the discussion below, the graph of FIG. 1 will be referred to as a "correlation graph".
- Prior to discussing the method utilized to construct the graph of FIG. 1, reference is made to FIG. 2, which is a depiction of an exemplary comparison of video content items using the comparison system of FIG. 1. A timeline 210 indicates that the exemplary frames of video content which appear on the left side of the figure occur earlier in the video (i.e., the content item) than those shown to their right, which appear progressively later in the video. It is appreciated that the example provided in FIG. 2 and the following discussion is a simplified example, provided in order to highlight the operation of an embodiment, which a person of skill in the art would be able to generalize for application in more complex situations.
- A grid 220 shows video frames, ordered from the earliest to the latest in the video file. Columns in the grid 220 indicate: a first frame; a second frame; and a third frame. Rows in the grid 220 indicate versions of the content as depicted in FIG. 1. In order to simplify the upcoming figures, the Original Content item 110 is denoted, for convenience sake, as ①; version V1 120 is denoted as ②; version V2 130 is denoted as ③; and version Vn 140 is denoted as ④.
- Each frame depicted in the grid 220 is shown with a number beneath it. The number is an indication of the number of faces appearing in the frame. Face detection is a technique known in the art, and is a subcategory of feature detection, in which the number of occurrences of a particular feature appearing in a given video frame is counted. Because, as discussed above, pirated videos are manipulated, different types of manipulations (as will be detailed below) are depicted, as well as the effect of each manipulation on the number of occurrences of the feature. An arrow emphasizes the area of the manipulation in the First Frame column of the grid 220. So, for example, the second frame of version V2 130 has a 1 beneath it, because only one face appears in the frame.
- Content version V1 120 (②) is shown as having been cropped on the left side of the frame. Content version V2 130 (③) is shown as having been cropped on the right side of the frame. Content version Vn 140 (④) is shown as having been cropped on the bottom of the frame. Turning now to the number of faces in the Original Content item 110 (①): the first frame shows two faces appearing in the frame; the second frame shows one face; and the third frame shows three faces. The effects of the cropping on the frames in the different versions of the content (V1 120, V2 130, and Vn 140, all from FIG. 1) are as follows: -
- Version V1 120 (②) has been cropped on the left side of the frame. Thus, in the first frame, the face on the left side of the frame 222 does not appear; only one face is counted, instead of the two faces which appear in the original frame. The left-side cropping of the frames in version V1 120 has not affected the faces in the remaining second and third frames.
- Version V2 130 (③) has been cropped on the right side of the frame. Thus, in the third frame, the two faces which appear on the right side of the original frame do not appear in version V2 130, and only one face is counted. The right-side cropping of the frames in version V2 130 has not affected the faces in the remaining first and second frames.
- Version Vn 140 (④) has been cropped on the bottom of the frame. Thus, in the first frame, the face in the lower left corner of the frame 222 is mostly obscured by the cropping. Likewise, in the third frame, the face in the lower right of the frame 224B is mostly obscured by the cropping. The second face in the upper right of the frame 224A, however, is not affected by the cropping.
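The per-frame face counts described above act as a simple fingerprint: two versions are taken to be directly related when the counts agree in enough frames (in this example, two of the three). A sketch of that comparison, using the counts from FIG. 2; the threshold value and all names are illustrative:

```python
# Sketch: declare a direct relationship between two versions when their
# per-frame feature (face) counts agree in at least `threshold` frames.
# Counts follow the FIG. 2 example; the 2-of-3 threshold is illustrative.
from itertools import combinations

face_counts = {
    "Original": [2, 1, 3],
    "V1":       [1, 1, 3],
    "V2":       [2, 1, 1],
    "Vn":       [1, 1, 2],
}


def frames_matching(a, b):
    """Number of frame positions whose feature counts agree."""
    return sum(x == y for x, y in zip(a, b))


def direct_edges(counts, threshold=2):
    """Undirected edges between versions whose fingerprints agree on at
    least `threshold` frames."""
    edges = set()
    for u, v in combinations(counts, 2):
        if frames_matching(counts[u], counts[v]) >= threshold:
            edges.add(frozenset((u, v)))
    return edges


graph_edges = direct_edges(face_counts)
```

On the example counts this yields exactly the edges of correlation graph 230: Original to V1, Original to V2, and V1 to Vn, with no direct edge from the Original to Vn, nor from V2 to Vn.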
- Table 1 below summarizes the number of faces in each frame of each version of the content item (the information of which also appears in FIG. 2):

TABLE 1
                        First Frame   Second Frame   Third Frame
Original Content item        2              1              3
V1                           1              1              3
V2                           2              1              1
Vn                           1              1              2

- Each of the different versions of the content is first compared to the original version of the content. Table 2 shows two of the rows in Table 1, for version V1 120 and the Original Content item 110. -
TABLE 2
                        First Frame      Second Frame   Third Frame
Original Content item        2                1               3
V1                           1                1               3
                        Does not match      Match           Match

Thus, since two of the three frames match, version V1 120 is considered to be a copy of the Original Content item 110. A correlation (undirected) graph 230 is thereby created, in which an arrow indicates that ② is derived from ① (i.e., ①→②).
- Likewise, Table 3 shows two of the rows in Table 1, for version V2 130 and the Original Content item 110: -
TABLE 3
                        First Frame   Second Frame   Third Frame
Original Content item        2             1                3
V2                           2             1                1
                           Match         Match       Does not match

Thus, version V2 130 is considered to have been made as a copy of the Original Content item 110. As such, in the correlation graph 230, where version V2 130 is denoted, for convenience sake, as ③, an arrow in the correlation graph 230 (①→③) indicates that ③ is derived from ①.
- However, as indicated in Table 4 below, comparing the row of Table 1 for version Vn 140 to the row of Table 1 for the Original Content item 110 shows the following result: -
TABLE 4
                        First Frame       Second Frame   Third Frame
Original Content item        2                 1               3
Vn                           1                 1               2
                        Does not match       Match      Does not match

Accordingly, since there is only one matching cell between version Vn 140 and the Original Content item 110, ④ is not shown in the correlation graph 230 as (directly) derived from ①.
- Comparing version V1 120 to version Vn 140 gives the following (Table 5): -
TABLE 5
       First Frame   Second Frame   Third Frame
V1          1             1               3
Vn          1             1               2
          Match         Match      Does not match

Accordingly, correlation graph 230 shows that ④ is derived from ②. Although no table is provided here, it can be seen by referring to Table 1 that version V2 130 has only one cell which matches version Vn 140. Thus, correlation graph 230 does not show any direct correlation between ③ and ④.
- Reference is now made to
FIGS. 3-5, which are a series of block diagrams detailing a method of determining a correlation graph, similar to the correlation graph 230 of FIG. 2. It is appreciated that the example given in FIGS. 3-5 uses an original content item ⓪ 310 and four additional content items: content ① 320; content ② 330; content ③ 340; and content ④ 350.
- Using an appropriate feature matching technique, such as the face matching technique used in the example of FIG. 2, a match is determined between the original content item ⓪ 310 and each of content ① 320 and content ② 330. No match, however, is found between content item ⓪ 310 and either of content ③ 340 and content ④ 350. In a second iteration of comparison, performed after the comparisons described above and depicted in FIG. 4, similar to the comparison of version V1 120 to version Vn 140 in FIG. 2, content ③ 340 is found to match content ② 330. In a third iteration of comparison, depicted in FIG. 5, content ③ 340 is found to match content ④ 350.
- A correlation graph 380 resulting from the example of FIGS. 3-5 is depicted in FIG. 6, graphically summarizing the results of the iterations of comparison depicted in FIGS. 3-5.
- Reference is now made to
FIG. 7, which is a depiction of a plurality of correlation graphs, such as the correlation graph 380 of FIG. 6, for inputting into a machine learning system. The machine learning system may comprise a neural network, a system implementing a clustering algorithm, a system implementing a naïve Bayes classification algorithm, or another appropriate machine learning method as is known in the art. A set of correlation graphs used as training data 710 is depicted on the left side of FIG. 7. The training data 710 data-set, which comprises correlation graphs of known non-pirated related content items, such as correlation graph 720 and correlation graph 725, as well as correlation graphs of known pirated related content items, such as correlation graph 730, is input into the machine learning system in order to train it to distinguish between types of correlation graphs which are indicative of non-pirated content and types which are indicative of pirated content. The machine learning system, via machine learning processes known in the art, learns to distinguish between the correlation graphs of non-pirated related content items, such as correlation graph 720 and correlation graph 725, which are typically non-sparse, and the correlation graphs of pirated content, such as correlation graph 730, which are typically sparse.
- Correlation graphs of pirated content are assumed to be sparse because pirates typically manipulate the video so that automatic detection via simple comparison becomes difficult. Because of the variety of manipulations, including change of color, cropping, rotation/translation, and audio mute/swap, there is a lower correlation between the different files compared. By contrast, correlation graphs of non-pirated content are assumed to be non-sparse, because little manipulation is anticipated in the files. Some level of variety might nevertheless be introduced in non-pirated content by legitimate operations, such as trans-coding. Where two content items are compared and have a higher level of similarity, as is assumed to be the case for groups of non-pirated content items, the nodes are graphed closer to one another. Conversely, two content items which are compared and have a lower level of similarity, as is assumed to be the case for groups of pirated content items, will result in nodes graphed further from one another. It is also understood that, in related fields of mathematics (such as, but not limited to, directed graphs), the distinction between sparse and non-sparse correlation graphs may appear vague and may depend on the context. However, as is known in the art, machine learning systems have been found to be successful in dealing with such vague distinctions.
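The sparse-versus-dense distinction can be made concrete with a single feature, edge density (the fraction of possible edges that are present). The toy classifier below learns a midpoint threshold from labeled densities; it merely stands in for the richer machine learning systems named above (neural network, clustering, naïve Bayes), and every graph and number in it is invented for illustration:

```python
# Toy sketch: classify a correlation graph as pirated (sparse) or
# non-pirated (dense) from its edge density. "Training" here just learns
# a midpoint threshold between the class means; the patent's machine
# learning system would be far richer. All graphs below are invented.

def edge_density(n_nodes, edges):
    """Fraction of possible undirected edges that are present."""
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible if possible else 0.0


def train_threshold(labeled_graphs):
    """labeled_graphs: iterable of (n_nodes, edges, is_pirated)."""
    pirated = [edge_density(n, e) for n, e, p in labeled_graphs if p]
    legit = [edge_density(n, e) for n, e, p in labeled_graphs if not p]
    return (sum(pirated) / len(pirated) + sum(legit) / len(legit)) / 2


def classify(n_nodes, edges, threshold):
    """Sparse graphs (density below the threshold) are flagged as pirated."""
    return "pirated" if edge_density(n_nodes, edges) < threshold else "non-pirated"


training = [
    (5, [(0, 1), (1, 2)], True),                           # sparse: pirated
    (4, [(0, 3)], True),                                   # sparse: pirated
    (4, [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)], False),  # dense: legitimate
    (5, [(i, j) for i in range(5) for j in range(i + 1, 5)], False),  # complete
]
t = train_threshold(training)
```

With the invented training set, the learned cutoff lands between the two class means, so a new five-node graph with only two edges falls on the sparse, pirated side.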
- Once the machine learning system has been trained using the machine learning techniques described above, groups of suspected videos may be obtained, and correlation graphs of unknown clusters 740 may be obtained for them using the methods described above. It is appreciated that each correlation graph, whether from the training data 710 data-set or from the unknown clusters 740, is a graph in which each node represents a content item and each edge represents a similarity factor.
- The correlation graphs of the unknown clusters 740 are then input into the now-trained machine learning system, which, on the basis of its training, outputs a decision indicating whether each such correlation graph represents pirated content items or non-pirated content items.
- Reference is now made to
FIG. 8, which is a block diagram of an exemplary device 800 comprising one or both of the machine learning system and a comparator which performs the comparison of the content items as described above with reference to FIGS. 2-6. The exemplary device 800 is suitable for implementing any of the systems, methods, or processes described above. The exemplary device 800 comprises one or more processors, such as processor 801, providing an execution platform for executing machine readable instructions such as software. One of the processors 801 may be a special purpose processor operative to perform the method for piracy detection described herein above.
- Commands and data from the processor 801 are communicated over a communication bus 802. The exemplary device 800 also includes a main memory 803, such as a Random Access Memory (RAM) 804, where machine readable instructions may reside during runtime, and a secondary memory 805. The secondary memory 805 includes, for example, a hard disk drive 807 and/or a removable storage drive 808, representing a floppy diskette drive, a magnetic tape drive, a compact disk drive, a flash drive, etc., or a nonvolatile memory where a copy of the machine readable instructions or software may be stored. The secondary memory 805 may also include ROM (read only memory), EPROM (erasable, programmable ROM), or EEPROM (electrically erasable, programmable ROM). In addition to software, data representing any one or more of the various content items discussed herein (for example, and without limiting the generality of the foregoing: the Original Content item 110 and the additional versions V1 120, V2 130, and Vn 140 of FIG. 1 and, correspondingly, FIG. 2; the original content item ⓪ 310, content ① 320, content ② 330, content ③ 340, and content ④ 350 of FIGS. 3-6; the correlation graphs such as correlation graph 380 of FIG. 6; the training data 710; and the unknown clusters 740 of FIG. 7), or other similar data, may be stored in the main memory 803 and/or the secondary memory 805. The removable storage drive 808 reads from and/or writes to a removable storage unit 809 in a well-known manner.
- A user can interface with the
exemplary device 800 via a user interface which includes input devices 811, such as a touch screen, a keyboard, a mouse, or a stylus, in order to provide user input data. A display adaptor 815 interfaces with the communication bus 802 and a display 817, receives display data from the processor 801, and converts the display data into display commands for the display 817.
- A network interface 319 is provided for communicating with other systems and devices via a network (such as network 155 of FIG. 1). The network interface 319 typically includes a wireless interface for communicating with wireless devices in the wireless community. A wired network interface (e.g., an Ethernet interface) may be present as well. The exemplary device 800 may also comprise other interfaces, including, but not limited to, Bluetooth and HDMI.
- The machine learning system 850, the use of which is described above with reference to FIG. 7, may be among the software and/or specialized hardware executed or controlled by the processor 801. As noted above, the machine learning system 850 may comprise any appropriate machine learning method as is known in the art, including, but not limited to, a neural network, a clustering algorithm, or a naïve Bayes classification algorithm. A comparator 860, which may itself comprise hardware, software, or a combination of both, which performs the comparing method described above with reference to FIGS. 2-5, and which outputs correlation graphs such as the correlation graph 380 of FIG. 6, is also typically executed or controlled by the processor 801.
- It will be apparent to one of ordinary skill in the art that one or more of the components of the exemplary device 800 may not be included and/or other components may be added as is known in the art. The exemplary device 800 shown in FIG. 8 is provided as an example of a possible platform that may be used, and other types of platforms may be used as is known in the art. One or more of the steps described above may be implemented as instructions embedded on a computer readable medium and executed on the exemplary device 800. The steps may be embodied by a computer program, which may exist in a variety of forms, both active and inactive. For example, they may exist as software program(s) comprised of program instructions in source code, object code, executable code, or other formats for performing some of the steps. Any of the above may be embodied on a computer readable medium, which includes storage devices and signals, in compressed or uncompressed form. Examples of suitable computer readable storage devices include conventional computer system RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes. Examples of computer readable signals, whether modulated using a carrier or not, are signals that a computer system hosting or running a computer program may be configured to access, including signals downloaded through the Internet or other networks. Concrete examples of the foregoing include distribution of the programs on a CD-ROM or via Internet download. In a sense, the Internet itself, as an abstract entity, is a computer readable medium; the same is true of computer networks in general. It is therefore to be understood that the functions enumerated above may be performed by any electronic device capable of executing them.
- Reference is now made to
FIG. 9, which is a simplified flow chart diagram of a method for an embodiment of the system of FIG. 1. At step 910, a plurality of target content items is stored in a storage device associated with a memory.
- At least one content item of the plurality of target content items is compared with the other content items of the plurality of target content items (step 920). At least on the basis of this comparison, a correlation graph indicating a level of correlation between each content item of the plurality of target content items and the other content items of the plurality of target content items is developed (step 930).
- At step 940, the correlation graph is input into a machine learning system. At step 950, a decision is output from the machine learning system, the decision indicating, on the basis of the level of correlation shown in the correlation graph, whether the content items represented in the correlation graph are pirated content items or are not pirated content items.
- It is appreciated that software components of the present invention may, if desired, be implemented in ROM (read only memory) or non-volatile memory form. The software components may, generally, be implemented in hardware, if desired, using conventional techniques. It is further appreciated that the software components may be instantiated, for example, as a computer program product or on a tangible medium. In some cases, it may be possible to instantiate the software components as a signal interpretable by an appropriate computer, although such an instantiation may be excluded in certain embodiments of the present invention.
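The method of FIG. 9 can be sketched end to end: store fingerprinted items (step 910), compare them pairwise and build the correlation graph (steps 920-930), and hand the graph to a classifier for a decision (steps 940-950). In this hypothetical sketch the per-frame counts, the matching threshold, and the density rule all stand in for the comparator and the trained machine learning system of the embodiments above:

```python
# Hypothetical end-to-end sketch of the FIG. 9 method: step 910 stores
# fingerprinted items, steps 920-930 build the correlation graph from
# pairwise comparisons, and steps 940-950 feed the graph to a stand-in
# classifier. Fingerprints, thresholds, and the density rule are all
# invented for illustration.
from itertools import combinations


def build_correlation_graph(fingerprints, threshold=2):
    """Steps 920-930: add an edge between two items whose per-frame
    feature counts agree in at least `threshold` frames."""
    edges = set()
    for u, v in combinations(fingerprints, 2):
        agree = sum(a == b for a, b in zip(fingerprints[u], fingerprints[v]))
        if agree >= threshold:
            edges.add(frozenset((u, v)))
    return edges


def decide(n_items, edges, density_cutoff=0.5):
    """Steps 940-950: a stand-in for the trained machine learning system;
    sparse correlation graphs are flagged as pirated."""
    possible = n_items * (n_items - 1) / 2
    density = len(edges) / possible if possible else 0.0
    return "pirated" if density < density_cutoff else "not pirated"


# Step 910: suspected items and their (invented) per-frame face counts.
items = {
    "S1": [2, 1, 3],
    "S2": [2, 1, 0],
    "S3": [0, 1, 3],
    "S4": [5, 5, 5],
    "S5": [2, 9, 3],
}
graph = build_correlation_graph(items)
verdict = decide(len(items), graph)
```

Here the three edges all hang off S1, producing the sparse, hub-like graph that the trained system of the embodiments above would flag as pirated.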
- It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable subcombination.
- It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the invention is defined by the appended claims and equivalents thereof.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/444,351 US11176452B2 (en) | 2017-02-28 | 2017-02-28 | Method and system for piracy detection |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180247054A1 true US20180247054A1 (en) | 2018-08-30 |
US11176452B2 US11176452B2 (en) | 2021-11-16 |
Family
ID=63246337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/444,351 Active 2039-12-31 US11176452B2 (en) | 2017-02-28 | 2017-02-28 | Method and system for piracy detection |
Country Status (1)
Country | Link |
---|---|
US (1) | US11176452B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10936700B2 (en) * | 2018-10-03 | 2021-03-02 | Matthew John Tooley | Method and system for detecting pirated video network traffic |
WO2021010999A1 (en) * | 2019-07-17 | 2021-01-21 | Nagrastar, Llc | Systems and methods for piracy detection and prevention |
US20220358762A1 (en) * | 2019-07-17 | 2022-11-10 | Nagrastar, Llc | Systems and methods for piracy detection and prevention |
US11778022B2 (en) * | 2019-08-14 | 2023-10-03 | Salesforce, Inc. | Dynamically generated context pane within a group-based communication interface |
US20210117480A1 (en) * | 2019-10-18 | 2021-04-22 | Nbcuniversal Media, Llc | Artificial intelligence-assisted content source identification |
US12080047B2 (en) * | 2019-10-18 | 2024-09-03 | Nbcuniversal Media, Llc | Artificial intelligence-assisted content source identification |
Also Published As
Publication number | Publication date |
---|---|
US11176452B2 (en) | 2021-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11176452B2 (en) | Method and system for piracy detection | |
US9251549B2 (en) | Watermark extractor enhancements based on payload ranking | |
EP3126958B1 (en) | Systems and methods for detecting copied computer code using fingerprints | |
KR101609088B1 (en) | Media identification system with fingerprint database balanced according to search loads | |
US8875303B2 (en) | Detecting pirated applications | |
US9418297B2 (en) | Detecting video copies | |
CN114491566B (en) | Fuzzy test method and device based on code similarity and storage medium | |
CN111191591B (en) | Watermark detection and video processing method and related equipment | |
US10600190B2 (en) | Object detection and tracking method and system for a video | |
US10699128B2 (en) | Method and system for comparing content | |
JP4164494B2 (en) | Digital data sequence identification | |
JP2005227756A (en) | Desynchronized fingerprinting method and system, for digital multimedia data | |
KR102233175B1 (en) | Method for determining signature actor and for identifying image based on probability of appearance of signature actor and apparatus for the same | |
US20180005080A1 (en) | Computer-readable storage medium storing image processing program and image processing apparatus | |
CN113853594A (en) | Granular access control for secure memory | |
CN111539929A (en) | Copyright detection method and device and electronic equipment | |
KR101373176B1 (en) | Copy video data detection method and apparatus, storage medium | |
KR100859215B1 (en) | Devices, systems, and methods for protecting content using fingerprinting and real-time evidence collection | |
US20220164417A1 (en) | Method of evaluating robustness of artificial neural network watermarking against model stealing attacks | |
KR102308477B1 (en) | Method for Generating Information of Malware Which Describes the Attack Charateristics of the Malware | |
CN113378118A (en) | Method, apparatus, electronic device, and computer storage medium for processing image data | |
JP7075362B2 (en) | Judgment device, judgment method and judgment program | |
KR100823729B1 (en) | Method and system for preventing the dissemination of image on network using image identification information | |
US20200394383A1 (en) | Electronic apparatus for recognizing multimedia signal and operating method of the same | |
CN116415255A (en) | System vulnerability detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PORAT, URI;GLAZNER, YOAV;STERN, AMITAY;REEL/FRAME:041389/0504 Effective date: 20170228 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PRE-INTERVIEW COMMUNICATION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction |