Video Production System With Social Media Content Item Modification Feature

Hundemer; Hank J.

Patent Application Summary

U.S. patent application number 15/211165 was filed with the patent office on 2017-01-19 for video production system with social media content item modification feature. The applicant listed for this patent is Tribune Broadcasting Company, LLC. Invention is credited to Hank J. Hundemer.

Application Number: 20170019694 / 15/211165
Family ID: 57775229
Filed Date: 2017-01-19

United States Patent Application 20170019694
Kind Code A1
Hundemer; Hank J. January 19, 2017

VIDEO PRODUCTION SYSTEM WITH SOCIAL MEDIA CONTENT ITEM MODIFICATION FEATURE

Abstract

In one aspect, an example method includes (i) selecting, by a computing system, a social media (SM) content item; (ii) identifying, by the computing system, an element of the selected SM content item based on the element being associated with a particular characteristic; (iii) modifying, by the computing system, the selected SM content item by modifying the identified element of the selected SM content item; and (iv) generating, by the computing system, video content that includes the modified SM content item.


Inventors: Hundemer; Hank J. (Bellevue, KY)
Applicant: Tribune Broadcasting Company, LLC (Chicago, IL, US)
Family ID: 57775229
Appl. No.: 15/211165
Filed: July 15, 2016

Related U.S. Patent Documents

Application Number: 62/242,593, filed Oct 16, 2015
Application Number: 62/194,171, filed Jul 17, 2015

Current U.S. Class: 1/1
Current CPC Class: G06F 3/04842 20130101; G06F 40/186 20200101; H04L 67/306 20130101; H04N 21/25841 20130101; H04N 21/6175 20130101; H04N 21/234 20130101; H04N 21/47214 20130101; H04N 21/23418 20130101; G11B 27/031 20130101; H04N 21/4334 20130101; H04N 21/8126 20130101; H04N 21/6125 20130101; H04L 51/32 20130101; H04L 65/602 20130101; H04L 67/10 20130101; G06F 3/0481 20130101; H04L 67/18 20130101; H04N 21/25875 20130101; H04N 21/235 20130101; H04N 21/8133 20130101; H04L 65/4076 20130101; H04N 21/2665 20130101; H04N 21/4316 20130101; H04N 21/854 20130101; H04L 65/604 20130101; H04N 21/84 20130101; H04L 65/607 20130101; H04N 21/262 20130101; H04L 67/02 20130101; H04N 21/23424 20130101; G06F 3/0486 20130101; H04N 21/458 20130101; G11B 27/00 20130101; H04N 21/435 20130101; G06F 3/0482 20130101; H04L 51/10 20130101; H04L 67/42 20130101; H04N 21/482 20130101
International Class: H04N 21/234 20060101 H04N021/234; H04N 21/2665 20060101 H04N021/2665

Claims



1. A method comprising: selecting, by a computing system, a social media (SM) content item; identifying, by the computing system, an element of the selected SM content item based on the element being associated with a particular characteristic; modifying, by the computing system, the selected SM content item by modifying the identified element of the selected SM content item; and generating, by the computing system, video content that includes the modified SM content item.

2. The method of claim 1, wherein the element is a text element or an image element.

3. The method of claim 1, wherein the element is a first element, the method further comprising: accessing, by the computing system, a set of elements, wherein the first element being associated with the particular characteristic comprises the first element having a threshold extent of similarity with a second element of the accessed set of elements.

4. The method of claim 1, wherein the element being associated with the particular characteristic comprises the element being stored in a predefined type of field.

5. The method of claim 1, wherein the element being associated with the particular characteristic comprises the element having a predefined characteristic.

6. The method of claim 1, wherein modifying the identified element comprises removing the identified element from the selected SM content item.

7. The method of claim 1, wherein modifying the identified element comprises redacting the identified element of the selected SM content item.

8. The method of claim 1, wherein modifying the identified element comprises replacing the identified element of the selected SM content item with a replacement element.

9. The method of claim 1, wherein generating video content that includes the modified SM content item comprises executing a digital video-effect (DVE), wherein the computing system is a first computing system, the method further comprising: transmitting, by the first computing system, to a second computing system, the generated video content for presentation of the generated video content on the second computing system.

10. A non-transitory computer-readable medium having stored thereon program instructions that upon execution by a processor, cause performance of a set of acts comprising: selecting, by a computing system, a social media (SM) content item; identifying, by the computing system, an element of the selected SM content item based on the element being associated with a particular characteristic; modifying, by the computing system, the selected SM content item by modifying the identified element of the selected SM content item; and generating, by the computing system, video content that includes the modified SM content item.

11. The non-transitory computer-readable medium of claim 10, wherein the element is a text element or an image element.

12. The non-transitory computer-readable medium of claim 10, wherein the element is a first element, the set of acts further comprising: accessing, by the computing system, a set of elements, wherein the first element being associated with the particular characteristic comprises the first element having a threshold extent of similarity with a second element of the accessed set of elements.

13. The non-transitory computer-readable medium of claim 10, wherein the element being associated with the particular characteristic comprises the element being stored in a predefined type of field.

14. The non-transitory computer-readable medium of claim 10, wherein the element being associated with the particular characteristic comprises the element having a predefined characteristic.

15. The non-transitory computer-readable medium of claim 10, wherein modifying the identified element comprises removing the identified element from the selected SM content item.

16. The non-transitory computer-readable medium of claim 10, wherein modifying the identified element comprises redacting the identified element of the selected SM content item.

17. The non-transitory computer-readable medium of claim 10, wherein modifying the identified element comprises replacing the identified element of the selected SM content item with a replacement element.

18. The non-transitory computer-readable medium of claim 10, wherein generating video content that includes the modified SM content item comprises executing a digital video-effect (DVE), wherein the computing system is a first computing system, the set of acts further comprising: transmitting, by the first computing system, to a second computing system, the generated video content for presentation of the generated video content on the second computing system.

19. A computing system configured for performing a set of acts comprising: selecting, by the computing system, a social media (SM) content item; identifying, by the computing system, an element of the selected SM content item based on the element being associated with a particular characteristic; modifying, by the computing system, the selected SM content item by modifying the identified element of the selected SM content item; and generating, by the computing system, video content that includes the modified SM content item.

20. The computing system of claim 19, wherein the element is a text element or an image element.
Description



RELATED DISCLOSURES

[0001] This disclosure claims priority to (i) U.S. Provisional Patent Application No. 62/194,171, titled "Video Production System with Social Media Features," filed on Jul. 17, 2015, and (ii) U.S. Provisional Patent Application No. 62/242,593, titled "Video Production System with Content-Related Features," filed on Oct. 16, 2015, both of which are hereby incorporated by reference in their entirety.

USAGE AND TERMINOLOGY

[0002] In this disclosure, unless otherwise specified and/or unless the particular context clearly dictates otherwise, the terms "a" or "an" mean at least one, and the term "the" means the at least one.

SUMMARY

[0003] In one aspect, an example method is disclosed. The method includes (i) selecting, by a computing system, a social media (SM) content item; (ii) identifying, by the computing system, an element of the selected SM content item based on the element being associated with a particular characteristic; (iii) modifying, by the computing system, the selected SM content item by modifying the identified element of the selected SM content item; and (iv) generating, by the computing system, video content that includes the modified SM content item.

[0004] In another aspect, an example non-transitory computer-readable medium is disclosed. The computer-readable medium has stored thereon program instructions that upon execution by a processor, cause performance of a set of acts including (i) selecting, by a computing system, a social media (SM) content item; (ii) identifying, by the computing system, an element of the selected SM content item based on the element being associated with a particular characteristic; (iii) modifying, by the computing system, the selected SM content item by modifying the identified element of the selected SM content item; and (iv) generating, by the computing system, video content that includes the modified SM content item.

[0005] In another aspect, an example computing system is disclosed. The computing system is configured for performing a set of acts including (i) selecting, by the computing system, a social media (SM) content item; (ii) identifying, by the computing system, an element of the selected SM content item based on the element being associated with a particular characteristic; (iii) modifying, by the computing system, the selected SM content item by modifying the identified element of the selected SM content item; and (iv) generating, by the computing system, video content that includes the modified SM content item.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 is a simplified block diagram of an example computing device.

[0007] FIG. 2 is a simplified block diagram of an example video system.

[0008] FIG. 3 is a simplified block diagram of an example video production system.

[0009] FIG. 4A is a simplified diagram of an example frame of video content, without content overlaid thereon.

[0010] FIG. 4B is a simplified diagram of an example frame of video content, with content overlaid thereon.

[0011] FIG. 5 is a simplified block diagram of an example program schedule.

[0012] FIG. 6 is a flow chart of an example method.

DETAILED DESCRIPTION

I. Overview

[0013] A video-production system (VPS) can generate video content that can serve as or be part of a video program (e.g., a news program). The VPS can then transmit the video content to a video-broadcast system (VBS), which in turn can transmit the video content to an end-user device for presentation of the video content to an end-user.

[0014] The VPS can include various components to facilitate generating video content. For example, the VPS can include a video source, a digital video-effect (DVE) system, a scheduling system, and a sequencing system. The video source can generate video content, and can transmit the video content to the DVE system. The DVE system can use the video content and a DVE template to execute a DVE, which can cause the DVE system to generate new video content that is a modified version of the received video content. For example, the generated video content can include the received video content with local weather content overlaid thereon.

[0015] The scheduling system can create a program schedule, perhaps based on input received from a user (e.g., a producer or technical director) via a user interface. The sequencing system can process records in the program schedule, and based on the processed records, can control one or more components of the VPS, such as the video source and the DVE system, to facilitate generating video content.

[0016] In one example, the VPS can also include a SM system and a character generator. The SM system can obtain a SM content item, and the character generator can then use the SM content item to generate video content that includes the SM content item. Further, the character generator can transmit the video content to the DVE system. The DVE system can receive the video content and can execute a DVE, which causes the DVE system to generate video content that includes the received video content and thus, that also includes the SM content item. The generated video content can serve as or be part of a video program. Thus, in this way, the VPS can integrate a SM content item into a video program.

[0017] In some instances, though, it can be desirable for the VPS to modify a SM content item before integrating it into a video program. In one example, the SM system can modify a SM content item by identifying a first element of the SM content item based on the first element being associated with a particular characteristic, and then modifying the selected SM content item by modifying the identified first element of the selected SM content item.

[0018] The first element of the SM content item can take various forms. In one example, the first element can be a text element, such as text published by a publisher in connection with the SM content item. As noted above, the SM system can identify a first element of the selected SM content item based on the first element being associated with a particular characteristic. In one example, the SM system can access a set of elements and the first element being associated with the particular characteristic can mean the first element having a threshold extent of similarity with a second element of the accessed set of elements.

[0019] To illustrate an example of this, consider a scenario where the elements at issue are words. In this scenario, the SM system can identify a first word of the SM content item based on the first word having a threshold extent of similarity with a second word of an accessed set of words. In one example, the accessed set of words can include potentially vulgar or offensive words, and thus it may not be desirable to integrate such words into a video program.

[0020] The threshold extent of similarity can vary depending on a desired tolerance level. For instance, the SM system can identify the first word based on the first word being identical to the second word. Alternatively, the SM system can identify the first word based on the first word and the second word being sufficiently similar despite the words not being identical. This can help the SM system identify a first word that is a slight misspelling or other variation of a second word, for instance.

[0021] As noted above, the SM system can modify the selected SM content item by modifying the identified first element of the selected SM content item. The SM system can modify the identified first element in various ways. For example, the SM system can modify the identified element by removing the identified element from the selected SM content item. To illustrate this, in the case where the identified element is text or an image, the SM system can remove the text or image from the SM content item.
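The three modification approaches described above (removal, redaction, and replacement; see also claims 6-8) can be sketched as follows. This is a minimal illustration assuming a SM content item is held as a plain dictionary; the field names and helper functions are hypothetical, not part of the disclosure.

```python
import re

# Hypothetical SM content item as a plain dict; the field names
# ("publisher", "text") are illustrative assumptions.
item = {"publisher": "@example_user",
        "text": "Breaking: storm hits the coast, darn it"}

def remove_element(item, field):
    """Remove the identified element from the SM content item entirely."""
    modified = dict(item)
    modified.pop(field, None)
    return modified

def redact_element(item, field, word):
    """Redact the identified word in place, preserving its length."""
    modified = dict(item)
    modified[field] = re.sub(re.escape(word), "*" * len(word), modified[field])
    return modified

def replace_element(item, field, replacement):
    """Replace the identified element with a replacement element."""
    modified = dict(item)
    modified[field] = replacement
    return modified
```

Each helper returns a modified copy, leaving the selected SM content item itself unchanged until the modified version is handed off for video-content generation.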

[0022] The VPS can then generate video content that includes the modified SM content item, which can serve as or be part of a video program. Thus, in this way, the VPS can integrate a modified SM content item into a video program. Notably, the SM system can identify an element of a SM content item and can modify the identified element in various other ways as described throughout this disclosure.

II. Example Architecture

[0023] A. Computing Device

[0024] FIG. 1 is a simplified block diagram of an example computing device 100. The computing device can be configured to perform and/or can perform one or more acts and/or functions, such as those described in this disclosure. The computing device 100 can include various components, such as a processor 102, a data storage unit 104, a communication interface 106, and/or a user interface 108. Each of these components can be connected to each other via a connection mechanism 110.

[0025] In this disclosure, the term "connection mechanism" means a mechanism that facilitates communication between two or more components, devices, systems, or other entities. A connection mechanism can be a relatively simple mechanism, such as a cable or system bus, or a relatively complex mechanism, such as a packet-based communication network (e.g., the Internet). In some instances, a connection mechanism can include a non-tangible medium (e.g., in the case where the connection is wireless).

[0026] The processor 102 can include a general-purpose processor (e.g., a microprocessor) and/or a special-purpose processor (e.g., a digital signal processor (DSP)). The processor 102 can execute program instructions contained in the data storage unit 104 as discussed below.

[0027] The data storage unit 104 can include one or more volatile, non-volatile, removable, and/or non-removable storage components, such as magnetic, optical, and/or flash storage, and/or can be integrated in whole or in part with the processor 102. Further, the data storage unit 104 can take the form of a non-transitory computer-readable storage medium, having stored thereon program instructions (e.g., compiled or non-compiled program logic and/or machine code) that, upon execution by the processor 102, cause the computing device 100 to perform one or more acts and/or functions, such as those described in this disclosure. These program instructions can define and/or be part of a discrete software application. In some instances, the computing device 100 can execute program instructions in response to receiving an input, such as from the communication interface 106 and/or the user interface 108. The data storage unit 104 can also store other types of data, such as those types described in this disclosure.

[0028] The communication interface 106 can allow the computing device 100 to connect with and/or communicate with another entity according to one or more protocols. In one example, the communication interface 106 can be a wired interface, such as an Ethernet interface or a high-definition serial-digital-interface (HD-SDI). In another example, the communication interface 106 can be a wireless interface, such as a cellular or WI-FI interface. In this disclosure, a connection can be a direct connection or an indirect connection, the latter being a connection that passes through and/or traverses one or more entities, such as a router, switcher, or other network device. Likewise, in this disclosure, a transmission can be a direct transmission or an indirect transmission.

[0029] The user interface 108 can include hardware and/or software components that facilitate interaction between the computing device 100 and a user of the computing device 100, if applicable. As such, the user interface 108 can include input components such as a keyboard, a keypad, a mouse, a touch-sensitive panel, a microphone, and/or a camera, and/or output components such as a display device (which, for example, can be combined with a touch-sensitive panel), a sound speaker, and/or a haptic feedback system.

[0030] The computing device 100 can take various forms, such as a workstation terminal, a desktop computer, a laptop, a tablet, a mobile phone, a set-top box, and/or a television.

[0031] B. Video System

[0032] FIG. 2 is a simplified block diagram of an example video system 200.

[0033] The video system 200 can perform various acts and/or functions related to video content, and can be implemented as a computing system. In this disclosure, the term "computing system" means a system that includes at least one computing device. In some instances, a computing system can include one or more other computing systems.

[0034] The video system 200 can include various components, such as a VPS 202, a VBS 204, and an end-user device 206, each of which can be implemented as a computing system. The video system 200 can also include a connection mechanism 208, which connects the VPS 202 with the VBS 204; and a connection mechanism 210, which connects the VBS 204 with the end-user device 206.

[0035] FIG. 3 is a simplified block diagram of an example VPS 202. The VPS 202 can include various components, such as a video source 302, a SM system 306, a character generator 308, a DVE system 310, a scheduling system 312, and a sequencing system 314, each of which can be implemented as a computing system. The VPS 202 can also include a connection mechanism 316, which connects the video source 302 with the sequencing system 314; a connection mechanism 318, which connects the video source 302 with the DVE system 310; a connection mechanism 322, which connects the SM system 306 with the sequencing system 314; a connection mechanism 324, which connects the SM system 306 with the character generator 308; a connection mechanism 326, which connects the character generator 308 with the sequencing system 314; a connection mechanism 328, which connects the character generator 308 with the DVE system 310; a connection mechanism 330, which connects the DVE system 310 with the sequencing system 314; and a connection mechanism 332, which connects the scheduling system 312 with the sequencing system 314.

[0036] The video source 302 can take various forms, such as a video server, a video camera, a satellite receiver, a character generator, or a DVE system. An example video server is the K2 server provided by Grass Valley of San Francisco, Calif.

[0037] The character generator 308 can take various forms. An example character generator is the VIZ TRIO provided by Viz Rt of Bergen, Norway. Another example character generator is CASPAR CG developed and distributed by the Swedish Broadcasting Corporation (SVT).

[0038] The DVE system 310 can take various forms, such as a production switcher. An example production switcher is the VISION OCTANE production switcher provided by Ross Video Ltd. of Iroquois, Ontario in Canada.

[0039] The scheduling system 312 can take various forms. An example scheduling system is WO TRAFFIC provided by WideOrbit, Inc. of San Francisco, Calif. Another example scheduling system is OSI-TRAFFIC provided by Harris Corporation of Melbourne, Fla.

[0040] The sequencing system 314 can take various forms. A sequencing system is sometimes referred to in the industry as a "production automation system."

[0041] Referring back to FIG. 2, the VBS 204 can include various components, such as a terrestrial antenna or a satellite transmitter, each of which can be implemented as a computing system.

[0042] Each of the video-based entities described in this disclosure can include or be integrated with a corresponding audio-based entity. Also, the video content described in this disclosure can include or be integrated with corresponding audio content.

III. Example Operations

[0043] The video system 200 and/or components thereof can perform various acts and/or functions. These features and related features will now be described.

[0044] The video system 200 can perform various acts and/or functions related to video content. For example, the video system 200 can receive, generate, output, and/or transmit video content that can serve as or be part of a video program (e.g., a news program). In this disclosure, the act of receiving, generating, outputting, and/or transmitting video content can occur in various ways and/or according to various standards. For example, the act of receiving, outputting, and/or transmitting video content can include receiving, outputting, and/or transmitting a video stream representing the video content, such as over Internet Protocol (IP) or in accordance with the high-definition serial digital interface (HD-SDI) standard. Likewise, the act of generating content can include generating a video stream representing the video content. Also, the act of receiving, generating, outputting, and/or transmitting video content can include receiving, generating, outputting, and/or transmitting an encoded or decoded version of the video content.

[0045] The VPS 202 can perform various acts and/or functions related to video content production. For example, the VPS 202 can generate and/or output video content, and can transmit the video content to another entity, such as the VBS 204.

[0046] Referring back to FIG. 3, within the VPS 202, the video source 302 can generate and/or output video content, and can transmit the video content to another entity, such as the DVE system 310. In practice, the VPS 202 is likely to include multiple video sources and corresponding connection mechanisms, each connecting a respective one of the video sources with the DVE system 310.

[0047] As noted above, the video source 302 can take the form of a video server. A video server can record and/or store video content (e.g., in the form of a file). Further, the video server can retrieve stored video content and can use the retrieved video content to generate and/or output a video stream representing the video content. This is sometimes referred to in the industry as the video server playing out the video content. The video server 302 can then transmit the video stream, thereby transmitting the video content, to another entity, such as the DVE system 310.

[0048] The SM system 306 can perform various acts and/or functions related to SM content. In this disclosure, "SM content" is content that has been published on a SM platform, which is a computer-based tool that allows users to create, share, and/or exchange content (e.g., in the form of text, images, and/or videos) in virtual communities on a computer-based network such as the Internet. Examples of SM platforms include TWITTER, YOUTUBE, FACEBOOK, PERISCOPE, INSTAGRAM, MEERKAT, LINKEDIN, and GOOGLE+.

[0049] SM content has become a prominent and influential source of news and entertainment content. Indeed, SM platforms are more and more often a news-breaking source of information. It can thus be beneficial for video content providers to incorporate SM content items into a video program.

[0050] However, video content providers can encounter a number of technological challenges that make it difficult to incorporate SM content items into a video program. For example, receiving, modifying, and integrating SM content items into a video program is generally a time-consuming and labor-intensive process using conventional computing systems and technology platforms. This can be particularly problematic in the context of a news program in which it may be beneficial to quickly receive, modify, and integrate a SM content item into the news program.

[0051] The VPS 202 can overcome these and other technological challenges. Among other things, the VPS 202 can provide technological solutions that allow SM content items to be received, modified, and integrated into a video program in an efficient and timely manner. The described technical solutions can also provide numerous other benefits, which will be apparent from this disclosure.

[0052] In line with the discussion above, the SM system 306 can receive a SM content item and can do so in various ways. For example, the SM system can receive a SM content item by obtaining it from another entity, such as a SM platform. In one example, the SM system 306 can obtain a SM content item directly from a SM platform. In another example, the SM system can obtain a SM content item from a SM platform via a SM dashboard application (e.g., TWEETDECK, CYFE, or HOOTSUITE). In some instances, a SM dashboard application can provide additional searching and browsing functionalities (e.g., based on trend analysis or analytics) that may not be provided by the SM platform itself, and/or can provide access to multiple SM platforms through a single user interface.

[0053] A SM content item can include various elements such as (i) data indicating the SM platform from which the SM content item was received, (ii) data identifying the publisher of the SM content item (e.g., an account identifier, such as a username), (iii) a profile image corresponding to the publisher of the SM content item, (iv) text published by the publisher in connection with the SM content item, (v) an image published by the publisher in connection with the SM content item, (vi) audio content published by the publisher in connection with the SM content item, (vii) video content published by the publisher in connection with the SM content item, (viii) a timestamp indicating a time and/or date at which the SM content item was published on the SM platform, (ix) a location (e.g., represented by global positioning system (GPS) coordinates) of the publisher when the SM content item was published, (x) a location at which an aspect of the SM content item occurred (e.g., where video content was recorded or where a photograph was taken), (xi) a timestamp indicating when an aspect of the SM content item occurred, (xii) a number of other users associated with the publisher on a SM platform (e.g., a number of friends or followers), (xiii) an indication of how long the publisher has been a user of a SM platform, (xiv) a number of times the SM content item has been shared (e.g., retweeted) by other users of a SM platform, (xv) a number of posts by the publisher on a SM platform, and/or (xvi) any other data that can be integrated into a video program.
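The enumerated elements can be pictured as fields of a structured record. The sketch below is an assumed container mirroring a subset of the elements listed above; the class and field names are illustrative, not an API defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative container for a SM content item; roman numerals in the
# comments refer to the corresponding elements enumerated in [0053].
@dataclass
class SMContentItem:
    platform: str                              # (i) SM platform of origin
    publisher: str                             # (ii) account identifier, e.g. a username
    profile_image_url: Optional[str] = None    # (iii) publisher's profile image
    text: Optional[str] = None                 # (iv) text published with the item
    image_url: Optional[str] = None            # (v) image published with the item
    published_at: Optional[str] = None         # (viii) publication timestamp
    location: Optional[Tuple[float, float]] = None  # (ix) publisher GPS coordinates
    share_count: int = 0                       # (xiv) times shared/retweeted

item = SMContentItem(platform="TWITTER", publisher="@example_user",
                     text="Breaking news from the scene", share_count=42)
```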

[0054] The SM system can also store, select, and/or retrieve a SM content item, perhaps based on input received from a user (e.g., a producer or technical director) via a user interface. As such, the SM system 306 can store an obtained SM content item in a data storage unit (e.g., a data storage unit of the SM system 306), and can then receive the SM content item by selecting and retrieving it from the data storage unit.

[0055] In some instances, the SM system 306 can select and modify a SM content item. The SM system 306 can select a SM content item in various ways. For example, the SM system 306 can select a SM content item responsive to the SM system 306 performing an action in connection with the SM content item (e.g., responsive to the SM content system 306 receiving or storing the SM content item). In another example, the SM system 306 can select a SM content item based on the SM content item being associated with a particular characteristic (e.g., based on the SM content item being scheduled to be integrated into a video program). In another example, the SM system 306 can, periodically or based on a schedule, select a SM content item for routine processing. As yet another example, the SM system 306 can select a SM content item based on input received from a user via a user interface.

[0056] The SM system 306 can then modify the selected SM content item by identifying a first element of the selected SM content item based on the first element being associated with a particular characteristic, and then modifying the selected SM content item by modifying the identified first element of the selected SM content item.

[0057] The first element of the SM content item can take various forms as described above. In one example, the first element can be a text element, such as text published by a publisher in connection with the SM content item, or a username of the publisher. In another example, the first element can be an image element, such as an image published by a publisher in connection with the SM content item, or a profile image of the publisher.

[0058] As noted above, the SM system 306 can identify a first element of the selected SM content item based on the first element being associated with a particular characteristic. In one example, the SM system 306 can access a set of elements and the first element being associated with the particular characteristic can mean the first element having a threshold extent of similarity with a second element of the accessed set of elements.

[0059] To illustrate an example of this, consider a scenario where the elements at issue are words. In this scenario, the SM system 306 can identify a first word of the SM content item based on the first word having a threshold extent of similarity with a second word of an accessed set of words. In one example, the accessed set of words can include potentially vulgar or offensive words, and thus it may not be desirable to integrate such words into a video program.

[0060] The threshold extent of similarity can vary depending on a desired tolerance level. For instance, the SM system 306 can identify the first word based on the first word being identical to the second word. Alternatively, the SM system 306 can identify the first word based on the first word and the second word being sufficiently similar despite the words not being identical. This can help the SM system 306 identify a first word that is a slight misspelling or other variation of a second word, for instance. For this purpose, the SM system 306 can directly or indirectly use any text comparison and scoring techniques now known in the art or later discovered.
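The threshold-based word comparison described above can be illustrated with a minimal sketch. This is not the patent's implementation; it assumes a simple similarity ratio (here, Python's `difflib.SequenceMatcher`) and an example threshold of 0.85, where 1.0 means the words are identical.

```python
import difflib

def find_flagged_words(text, flagged_set, threshold=0.85):
    """Return words in `text` that have at least the threshold extent of
    similarity with some word in `flagged_set` (1.0 means identical)."""
    hits = []
    for word in text.lower().split():
        for flagged in flagged_set:
            ratio = difflib.SequenceMatcher(None, word, flagged).ratio()
            if ratio >= threshold:
                hits.append(word)
                break
    return hits
```

With a threshold below 1.0, a slight misspelling such as "badwrd1" would still match a flagged word "badword1"; setting the threshold to 1.0 restricts matching to identical words, reflecting the tolerance-level choice described above.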

[0061] In one example, the SM system 306 can maintain the set of words and can update it over time as desired, perhaps based on input from a user via a user interface or based on input received from another computing system.

[0062] As noted above, the SM system 306 can access a set of elements and the first element being associated with the particular characteristic can mean the first element having a threshold extent of similarity with a second element of the accessed set of elements. To illustrate another example of this, consider a scenario where the elements at issue are images. In this scenario, the SM system 306 can identify a first image of the SM content item based on the first image having a threshold extent of similarity with a second image of an accessed set of images. In one example, the accessed set of images can include potentially vulgar or offensive images, and thus it may not be desirable to integrate such images into a video program.

[0063] The threshold extent of similarity can vary depending on a desired tolerance level. For instance, the SM system 306 can identify the first image based on the first image being identical to the second image. Alternatively, the SM system 306 can identify the first image based on the first image and the second image being sufficiently similar despite the images not being identical. This can help the SM system 306 identify a first image that is a slight variation of a second image, for instance. For this purpose, the SM system 306 can directly or indirectly use any image comparison and scoring techniques, such as those that utilize image-based fingerprints, now known in the art or later discovered.
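One family of image-based fingerprint techniques mentioned above can be sketched as follows. This is an illustrative toy example, not the patent's implementation: it assumes images are already provided as small grayscale pixel grids (lists of rows of 0-255 values) and uses a basic average-hash fingerprint with a Hamming-distance threshold; production systems would use far more robust fingerprinting.

```python
def average_hash(pixels):
    """Compute a simple average-hash fingerprint from a grayscale pixel
    grid: each bit is 1 when the pixel is above the grid's mean value."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count the bit positions where two fingerprints differ."""
    return sum(a != b for a, b in zip(h1, h2))

def images_similar(p1, p2, max_distance=2):
    """True when the two fingerprints are within the distance threshold."""
    return hamming_distance(average_hash(p1), average_hash(p2)) <= max_distance
```

A small change to one pixel leaves the fingerprint nearly unchanged, so a slight variation of a flagged image can still be identified, while a substantially different image falls outside the threshold.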

[0064] In one example, the SM system 306 can maintain the set of images and can update it over time as desired, perhaps based on input from a user via a user interface or based on input received from another computing system.

[0065] As noted above, the first element being associated with the particular characteristic can include the first element having a threshold extent of similarity with a second element of the accessed set of elements. But in another example, the first element being associated with the particular characteristic can include the first element being stored in a predefined type of field.

[0066] To illustrate an example of this, consider a scenario where the first element at issue is text that is a username of the SM content item. In this scenario, the SM system 306 can identify the text based on the text being stored in a username field (e.g., of a data structure configured to store a SM content item). For privacy or other reasons, it may be undesirable to integrate such text into a video program.

[0067] To illustrate another example of this, consider a scenario where the first element at issue is an image that is a profile image of the SM content item. In this scenario, the SM system 306 can identify the image based on the image being stored in a profile image field. Again, for privacy or other reasons, it may be undesirable to integrate such an image into a video program.
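Identification by predefined field type can be sketched as follows, assuming (hypothetically) that a SM content item is represented as a dictionary whose keys are field names; the field names here are illustrative, not taken from the patent.

```python
# Hypothetical field names for fields of a predefined private type;
# a real SM content item data structure may use different names.
PRIVATE_FIELDS = {"username", "profile_image"}

def identify_private_elements(sm_item):
    """Return the elements of an SM content item (a dict) that are
    stored in a predefined type of field, regardless of their values."""
    return {f: v for f, v in sm_item.items() if f in PRIVATE_FIELDS}
```

Note that this approach identifies elements purely by where they are stored, not by their content, which is what distinguishes it from the similarity-based identification described earlier.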

[0068] As noted above, the first element being associated with the particular characteristic can include the first element being stored in a predefined type of field. But in another example, the first element being associated with the particular characteristic can include the element having a predefined characteristic.

[0069] To illustrate an example of this, consider a scenario where the first element is text that is a telephone number published by a publisher in connection with the SM content item. In this scenario, the SM system 306 can identify the text based on the text having a particular format (e.g., XXX-XXX-XXXX). For privacy or other reasons, it may be undesirable to integrate such text into a video program.
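Identifying text by format, as in the telephone-number example above, can be sketched with a regular expression. This sketch covers only the XXX-XXX-XXXX format given as an example; a real system would presumably handle additional formats.

```python
import re

# Matches only the XXX-XXX-XXXX example format described above.
PHONE_PATTERN = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def find_phone_numbers(text):
    """Return substrings of `text` having the predefined phone format."""
    return PHONE_PATTERN.findall(text)
```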

[0070] To illustrate another example of this, consider a scenario where the first element is an image published by a publisher in connection with the SM content item. In this scenario, the SM system 306 can identify the image as being a vulgar or offensive image, and thus it may not be desirable to integrate such an image into a video program. For this purpose, the SM system 306 can directly or indirectly use any image analysis technique now known in the art or later discovered.

[0071] As noted above, the SM system 306 can modify the selected SM content item by modifying the identified first element of the selected SM content item. The SM system 306 can modify the identified first element in various ways. For example, the SM system 306 can modify the identified element by removing the identified element from the selected SM content item. To illustrate this, in the case where the identified element is text or an image, the SM system 306 can remove the text or image from the SM content item.

[0072] As another example, the SM system 306 can modify the identified element by redacting the identified element of the selected SM content item. To illustrate this, in the case where the element is text or an image, the SM system 306 can redact the text or image by superimposing the text "REDACTED" over some or all of the text or image.

[0073] As another example, the SM system 306 can modify the identified element by replacing the identified element of the selected SM content item with a replacement element. For example, in the case where the element is text or an image, the SM system 306 can replace the text or image with the text "REMOVED."
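The three modification strategies above (removing, redacting, and replacing an identified element) can be sketched together, again assuming a dictionary-based SM content item. Redaction here is simplified to overwriting a text value with "REDACTED"; for an image element, redaction as described above would instead superimpose that text over some or all of the image.

```python
def modify_element(sm_item, field, mode="redact"):
    """Apply one of the three modification strategies to the identified
    field of an SM content item (a dict), returning a modified copy."""
    item = dict(sm_item)  # leave the original item untouched
    if mode == "remove":
        item.pop(field, None)       # remove the element entirely
    elif mode == "redact":
        item[field] = "REDACTED"    # simplified stand-in for redaction
    elif mode == "replace":
        item[field] = "REMOVED"     # replace with a replacement element
    else:
        raise ValueError(f"unknown mode: {mode}")
    return item
```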

[0074] Alternatively or additionally, the SM system 306 can modify the SM content item based on input received from a user via a user interface.

[0075] The SM system 306 can also transmit a SM content item to another entity, such as the character generator 308.

[0076] The character generator 308 can use a character generator template and content to generate and/or output video content that includes the content. The character generator template specifies the manner in which the character generator 308 uses the content to generate and/or output the video content. The character generator 308 can create and/or modify a character generator template, perhaps based on input received from a user via a user interface. Further, the character generator 308 can store, select, and/or retrieve a character generator template, perhaps based on input received from a user via a user interface. As such, the character generator 308 can store a character generator template in a data storage unit (e.g., a data storage unit of the character generator 308), and can then receive the character generator template by retrieving it from the data storage unit.

[0077] The character generator 308 can also receive content in various ways. For example, the character generator 308 can receive content by receiving it from another entity, such as the SM system 306. In another example, the character generator 308 can receive content by selecting and retrieving it from a data storage unit (e.g., a data storage unit of the SM system 306).

[0078] The character generator template can specify how the character generator 308 is to receive content. In one example, the character generator template can do so by specifying that the character generator 308 is to receive content on a particular input of the character generator 308 (e.g., an input that maps to a particular entity, such as the SM system 306). In another example, the character generator template can do so by specifying that the character generator 308 is to receive content by retrieving it from a particular location of a particular data storage unit (e.g., a data storage unit of the character generator 308).

[0079] In one example, the character generator 308 can use an ordered set of content items to generate video content that includes the content items in the specified order. This type of generated video content is sometimes referred to in the industry as a "ticker." The content items can include various types of content, such as text and/or images. In one example, each of these content items can be a SM content item. The ordered set of content items can be stored in various forms, such as in the form of an Extensible Markup Language (XML) file.
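An ordered set of ticker content items stored as XML, as described above, might be parsed as in the following sketch. The XML layout here is hypothetical, as the patent does not specify a schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical ticker schema; the actual XML layout is not specified.
TICKER_XML = """
<ticker>
  <item order="1">Breaking: storm approaching</item>
  <item order="2">Traffic delayed on I-90</item>
</ticker>
"""

def load_ticker_items(xml_text):
    """Parse ticker content items from XML and return their text
    in the order declared by each item's `order` attribute."""
    root = ET.fromstring(xml_text)
    items = sorted(root.findall("item"), key=lambda e: int(e.get("order")))
    return [e.text for e in items]
```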

[0080] After the character generator 308 generates and/or outputs video content, the character generator 308 can transmit the video content to another entity, such as the DVE system 310, and/or can store the video content in a data storage unit (e.g., a data storage unit of the character generator 308).

[0081] As such, in one example, the character generator 308 can receive a SM content item, can use the SM content item to generate and/or output video content that includes the SM content item, and can transmit the video content to the DVE system 310.

[0082] The DVE system 310 can use a DVE template to generate and/or output video content. This is sometimes referred to in the industry as the DVE system "executing a DVE." In some instances, the DVE system 310 can execute multiple DVEs in serial or overlapping fashion.

[0083] The DVE template specifies the manner in which the DVE system 310 generates and/or outputs video content. The DVE system 310 can create and/or modify a DVE template, perhaps based on input received from a user via a user interface. Further, the DVE system 310 can store and/or retrieve a DVE template, perhaps based on input received from a user via a user interface. As such, the DVE system 310 can store a DVE system template in a data storage unit (e.g., a data storage unit of the DVE system 310), and can then receive the DVE template by selecting and retrieving it from the data storage unit.

[0084] In some instances, the DVE system 310 can use the DVE template and content to generate and/or output video content that includes the content. The DVE system 310 can receive content in various ways. For example, the DVE system 310 can do so by receiving it from another entity, such as the video source 302 and/or the character generator 308. In another example, the DVE system 310 can do so by selecting and retrieving it from a data storage unit (e.g., a data storage unit of the DVE system 310).

[0085] The DVE template can specify how the DVE system 310 is to receive content. In one example, the DVE template can do so by specifying that the DVE system 310 is to receive content on a particular input of the DVE system 310 (e.g., an input that maps to a particular entity, such as the video source 302 or the character generator 308). In another example, the DVE template can do so by specifying that the DVE system 310 is to receive content by retrieving it from a particular location of a particular data storage unit (e.g., a data storage unit of the DVE system 310).

[0086] A DVE template can be configured in various ways, which can allow the DVE system 310 to execute various types of DVEs. In one example, a DVE template can specify that the DVE system 310 is to receive video content from the video source 302 and other content (e.g., local weather content) from a data storage unit of the DVE system, and is to overlay the other content on the video content, thereby generating a modified version of the video content. As such, in one example, the DVE system 310 can generate video content by modifying video content.

[0087] FIGS. 4A and 4B help illustrate this concept of overlaying other content on video content. FIG. 4A is a simplified depiction of an example frame 400 of video content. Frame 400 includes content 402, but does not include other content overlaid on content 402. For comparison, FIG. 4B is a simplified depiction of another example frame 450 of video content. Frame 450 includes content 452 and other content 454 overlaid on content 452.

[0088] In another example, a DVE template can specify that the DVE system 310 is to receive first video content from the video source 302 and second video content from the character generator 308, and is to overlay the second video content on the first video content, thereby generating a modified version of the first video content.

[0089] In another example, a DVE template can specify that the DVE system 310 is to receive first video content from the video source 302 and second video content from the character generator 308, and is to scale-down and re-position the first video content and the second video content, each in a respective one of two windows positioned side-by-side. As such, the DVE system 310 can generate video content by scaling and/or re-positioning video content.
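The side-by-side window layout described above reduces to simple rectangle arithmetic; the following sketch computes two equal scaled-down windows within a frame. The margin parameter is an assumption for illustration, not something the DVE template format specifies.

```python
def side_by_side_windows(frame_w, frame_h, margin=0):
    """Compute two equal windows positioned side by side within a
    frame, each as an (x, y, width, height) rectangle."""
    win_w = (frame_w - 3 * margin) // 2   # margins: left, middle, right
    win_h = frame_h - 2 * margin          # margins: top, bottom
    left = (margin, margin, win_w, win_h)
    right = (2 * margin + win_w, margin, win_w, win_h)
    return left, right
```

For a 1920x1080 frame with no margin, this yields two 960x1080 windows, one at x=0 and one at x=960; each video input would then be scaled to fit its window.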

[0090] After the DVE system 310 generates and/or outputs the video content, the DVE system 310 can transmit the video content to another entity, such as the VBS 204, or can store the video content in a data storage unit (e.g., a data storage unit of the DVE system 310).

[0091] As such, in one example, the DVE system 310 can receive first video content that includes a SM content item, and can use the first video content to generate and/or output second video content that includes the SM content item. This is an example way in which the VPS 202 can integrate a SM content item into a video program.

[0092] The VPS 202 can also integrate a SM content item into a video program in other ways. For example, in the case where the video source 302 is a video camera, the SM system 306 can include a display device that is located within the field of view of the video camera while the video camera records video content that serves as or is made part of the video program. In one example, the display device can be touch-enabled, which can allow a user (e.g., a news anchor) to interact with the SM content item. To facilitate the user's interaction with the SM content item, the display device and/or other components of the SM system 306 can be programmed with instructions that cause particular actions in response to particular touch commands.

[0093] In one example, the display device can initially display multiple small tiles, each representing a different SM content item. In this example, the SM content items can relate to weather conditions captured in photographs published on SM platforms by various different publishers. As such, each tile can display a different photograph. The position and ordering of the small tiles can be determined by a character generator template and/or a DVE template. Either template can also include programming instructions that can allow the commands provided via the touch-enabled display device to cause predefined actions for the displayed SM content items. For example, if a meteorologist taps on one of the small tiles a first time, the programming instructions can cause the tile to expand to enlarge the photograph and perhaps display additional elements of, or information associated with, the SM content item (e.g., a username, time, location, and/or text published in connection with the SM content item). Other commands can cause an expanded tile to return to its initial size and position. As the meteorologist interacts with the SM content items displayed on the display device, the video camera can generate video content including these interactions and thereby integrate the SM content items into the video program.

[0094] The scheduling system 312 can perform various acts and/or functions related to the scheduling of video content production. For example, the scheduling system 312 can create and/or modify a program schedule of a video program, perhaps based on input received from a user via a user interface. Further, the scheduling system 312 can store and/or retrieve a program schedule, perhaps based on input received from a user via a user interface. As such, the scheduling system 312 can store a program schedule in a data storage unit (e.g., a data storage unit of the scheduling system 312), and can then receive the program schedule by selecting and retrieving it from the data storage unit. The scheduling system 312 can also transmit a program schedule to another entity, such as the sequencing system 314.

[0095] The sequencing system 314 can process records in the program schedule. This can cause the sequencing system 314 to control one or more other components of the VPS 202 to facilitate the VPS 202 generating and/or outputting video content, which can serve as or be part of a video program. For example, the sequencing system 314 can control the video source 302, the SM system 306, the character generator 308, and/or the DVE system 310 to perform the various acts and/or functions described in this disclosure.

[0096] The sequencing system 314 can receive a program schedule in various ways. For example, the sequencing system 314 can do so by receiving it from another entity, such as the scheduling system 312. In another example, the sequencing system 314 can do so by selecting and retrieving it from a data storage unit (e.g., a data storage unit of the scheduling system 312).

[0097] A program schedule (sometimes referred to in the industry as a "rundown") serves as a schedule or outline of a video program and can include multiple records. A video program can be conceptually divided into multiple logically-separated portions (sometimes referred to in the industry as "stories"). As such, each portion of the video program can be represented by a separate record of the program schedule. In some cases, each record can also include one or more sub-records. Each record (including a sub-record) can include various types of data.

[0098] FIG. 5 is a simplified diagram of an example program schedule 500. The program schedule 500 includes ten records represented as ten ordered rows. Each record corresponds to a respective portion of a video program, except for one which corresponds to a commercial break. For each portion, the respective record specifies at least one data item that corresponds to that portion of the video program. In particular, each record specifies at least one of a story title, a video content item identifier, a duration, and a DVE identifier (which can serve as an instruction to execute the identified DVE).

[0099] A video content item can consist of logically-related video content. For example, a video content item can be a commercial. As another example, a video content item can be a portion of a television program that is scheduled between two commercial breaks. This is sometimes referred to in the industry as a "program segment."

[0100] As shown in FIG. 5, the first record specifies a story title of STORY A, a video content identifier of VCI ID A, a duration of 00:02:00:00 (in hours:minutes:seconds:frames format), and a DVE identifier of DVE ID A. As such, upon the sequencing system 314 processing the first record, the sequencing system 314 can cause the video source 302 to playout a video content item identified by the identifier VCI ID A for two minutes, and can further cause the DVE system 310 to execute a DVE identified by the identifier DVE ID A, which for example, can cause the DVE system 310 to overlay content on the identified video-content item.
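Converting a duration in the hours:minutes:seconds:frames format above to a frame count can be sketched as follows; the 30 frames-per-second rate is an assumption for illustration, as the patent does not state a frame rate.

```python
def duration_to_frames(duration, fps=30):
    """Convert a duration string in hours:minutes:seconds:frames
    format (e.g. "00:02:00:00") to a total frame count."""
    hours, minutes, seconds, frames = (int(p) for p in duration.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames
```

For the first record's duration of 00:02:00:00, this yields 120 seconds, or 3600 frames at 30 frames per second.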

[0101] The program schedule 500 has been greatly simplified for the purposes of illustrating certain features. In practice, a program schedule is likely to include significantly more data.

[0102] In some instances, the sequencing system 314 can process a next record in the program schedule based on a trigger event. In one example, the trigger event can be the sequencing system 314 completing one or more actions related to a current record in the program schedule. In another example, the trigger event can be the sequencing system 314 receiving input from a user via a user interface.

[0103] Referring back to FIG. 2, the VBS 204 can receive video content from the VPS 202, which in turn can transmit the video content to the end-user device 206 for presentation of the video content to an end user. In practice, the VBS 204 can transmit video content to a large number of end-user devices for presentation of the video content to a large number of end users. The VBS 204 can transmit video content to the end-user device 206 in various ways. For example, the VBS 204 can transmit video content to the end-user device 206 over-the-air or via a packet-based network such as the Internet. The end-user device 206 can receive video content from the VBS 204, and can present the video content to an end user via a user interface.

[0104] FIG. 6 is a flow chart illustrating an example method 600. At block 602, the method 600 can include selecting, by a computing system, a SM content item. At block 604, the method 600 can include identifying, by the computing system, an element of the selected SM content item based on the element being associated with a particular characteristic. At block 606, the method 600 can include modifying, by the computing system, the selected SM content item by modifying the identified element of the selected SM content item. At block 608, the method 600 can include generating, by the computing system, video content that includes the modified SM content item.
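The four blocks of method 600 can be tied together in a minimal end-to-end sketch. Everything here is illustrative: the dictionary representation of a SM content item, the caller-supplied `is_flagged` and `modify` callables, and the string stand-in for generated video content are all assumptions, not elements of the patent.

```python
def method_600(sm_items, is_flagged, modify):
    """A minimal sketch of method 600: select a SM content item,
    identify elements associated with a particular characteristic,
    modify those elements, and generate content including the result."""
    # Block 602: select a SM content item (here, simply the first one).
    item = sm_items[0]
    # Block 604: identify elements associated with the characteristic.
    flagged = [f for f, v in item.items() if is_flagged(f, v)]
    # Block 606: modify the selected item by modifying each element.
    for field in flagged:
        item = modify(item, field)
    # Block 608: generate video content including the modified item
    # (represented here as a simple render string).
    return "VIDEO[" + ", ".join(f"{k}={v}" for k, v in item.items()) + "]"
```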

IV. Example Variations

[0105] Although some of the acts and/or functions described in this disclosure have been described as being performed by a particular entity, the acts and/or functions can be performed by any entity, such as those entities described in this disclosure. Further, although the acts and/or functions have been recited in a particular order, the acts and/or functions need not be performed in the order recited. However, in some instances, it can be desired to perform the acts and/or functions in the order recited. Further, each of the acts and/or functions can be performed responsive to one or more of the other acts and/or functions. Also, not all of the acts and/or functions need to be performed to achieve one or more of the benefits provided by this disclosure, and therefore not all of the acts and/or functions are required.

[0106] Although certain variations have been discussed in connection with one or more examples of this disclosure, these variations can also be applied to all of the other examples of this disclosure.

[0107] Although select examples of this disclosure have been described, alterations and permutations of these examples will be apparent to those of ordinary skill in the art. Other changes, substitutions, and/or alterations are also possible without departing from the invention in its broader aspects as set forth in the following claims.

* * * * *

