Figure Interactive Systems And Methods

LIN; Sheng-Chun ;   et al.

Patent Application Summary

U.S. patent application number 12/782733 was filed with the patent office on May 19, 2010, and published on 2011-06-16 as publication number 20110143632, for figure interactive systems and methods. The invention is credited to Yu-Chuan CHANG, Yu-Shiang HUNG, and Sheng-Chun LIN.

Publication Number: 20110143632
Application Number: 12/782733
Family ID: 44143459
Publication Date: 2011-06-16

United States Patent Application 20110143632
Kind Code A1
LIN; Sheng-Chun ;   et al. June 16, 2011

FIGURE INTERACTIVE SYSTEMS AND METHODS

Abstract

Figure interactive systems and methods are provided. The system includes at least a base device. The base device includes a storage unit, a detecting unit, and a processing unit. The storage unit stores a content database recording scenario data corresponding to a plurality of figures. The detecting unit respectively detects identification data of at least a first figure and a second figure. The processing unit respectively retrieves the scenario data of the first figure and the second figure from the content database, dynamically generates an interactive instruction set for the first figure and the second figure, and enables the first figure and the second figure to interact according to the interactive instruction set.


Inventors: LIN; Sheng-Chun; (Taipei City, TW) ; CHANG; Yu-Chuan; (Kaohsiung City, TW) ; HUNG; Yu-Shiang; (Taipei City, TW)
Family ID: 44143459
Appl. No.: 12/782733
Filed: May 19, 2010

Current U.S. Class: 446/268
Current CPC Class: A63H 2200/00 20130101; A63H 3/28 20130101
Class at Publication: 446/268
International Class: A63H 3/00 20060101 A63H003/00

Foreign Application Data

Date Code Application Number
Dec 10, 2009 TW 98142210

Claims



1. A figure interactive system, comprising: at least a first figure and a second figure; and a base device, comprising: a storage unit storing a content database; a detecting unit respectively detecting identification data of the first figure and the second figure; and a processing unit respectively retrieving scenario data corresponding to the first figure and the second figure from the content database according to the identification data of the first figure and the second figure, dynamically generating an interactive instruction set for the first figure and the second figure according to the scenario data corresponding to the first figure and the second figure, and enabling the first figure and the second figure to interact with each other according to the interactive instruction set.

2. The system of claim 1, wherein the processing unit further transmits the identification data of the first figure and the second figure to a server via a network, and the server respectively retrieves the scenario data corresponding to the first figure and the second figure according to the identification data of the first figure and the second figure, and transmits the scenario data corresponding to the first figure and the second figure to the content database of the storage unit via the network.

3. The system of claim 2, wherein the processing unit further receives renewed scenario data corresponding to the first figure and the second figure from the server via the network, and stores the renewed scenario data to the content database.

4. The system of claim 1, wherein the interactive instruction set further comprises an action drive command, and the first figure or the second figure has at least a drive component for receiving the interactive instruction set from the base device, and driving the first figure or the second figure to perform an operation according to the action drive command in the interactive instruction set.

5. The system of claim 1, wherein the base device further comprises a display unit for displaying dialogues or images in the interactive instruction set.

6. The system of claim 1, wherein the base device further comprises a speaker for playing dialogues or sound effects in the interactive instruction set.

7. The system of claim 1, wherein, when the interactive instruction set is generated, the processing unit selects the first figure as the initial figure, retrieves first interactive data for a specific topic for the first figure from the scenario data corresponding to the first figure, retrieves second interactive data corresponding to the first interactive data for the specific topic for the second figure from the scenario data corresponding to the second figure, and combines the first interactive data and the second interactive data as the interactive instruction set.

8. A figure interactive system, comprising: at least a first figure and a second figure; a base device, comprising a detecting unit and a first communication unit, wherein the detecting unit respectively detects identification data of the first figure and the second figure; and an electronic device, comprising a second communication unit for communicating with the first communication unit via a communication connection, a storage unit for storing a content database, and a processing unit for respectively retrieving scenario data corresponding to the first figure and the second figure from the content database, dynamically generating an interactive instruction set for the first figure and the second figure according to the scenario data corresponding to the first figure and the second figure, and transmitting the interactive instruction set to the base device, wherein the base device enables the first figure and the second figure to interact with each other according to the interactive instruction set.

9. The system of claim 8, wherein the processing unit transmits the identification data of the first figure and the second figure to a server via a network, and the server respectively retrieves the scenario data corresponding to the first figure and the second figure according to the identification data of the first figure and the second figure, and transmits the scenario data corresponding to the first figure and the second figure to the content database of the storage unit via the network.

10. The system of claim 9, wherein the processing unit further receives renewed scenario data corresponding to the first figure and the second figure from the server via the network, and stores the renewed scenario data to the content database.

11. The system of claim 8, wherein the interactive instruction set further comprises an action drive command, and the first figure or the second figure has at least a drive component for receiving the interactive instruction set from the base device, and driving the first figure or the second figure to perform an operation according to the action drive command in the interactive instruction set.

12. The system of claim 8, wherein, when the interactive instruction set is generated, the processing unit selects the first figure as the initial figure, retrieves first interactive data for a specific topic for the first figure from the scenario data corresponding to the first figure, retrieves second interactive data corresponding to the first interactive data for the specific topic for the second figure from the scenario data corresponding to the second figure, and combines the first interactive data and the second interactive data as the interactive instruction set.

13. A figure interactive method, comprising: respectively detecting identification data of a first figure and a second figure; respectively retrieving scenario data corresponding to the first figure and the second figure from a content database according to the identification data of the first figure and the second figure; dynamically generating an interactive instruction set for the first figure and the second figure according to the scenario data corresponding to the first figure and the second figure; and enabling the first figure and the second figure to interact with each other according to the interactive instruction set.

14. The method of claim 13, further comprising: transmitting the identification data of the first figure and the second figure to a server via a network, wherein the server respectively retrieves the scenario data corresponding to the first figure and the second figure according to the identification data of the first figure and the second figure; and receiving the scenario data corresponding to the first figure and the second figure via the network as the content database.

15. The method of claim 14, further comprising: receiving renewed scenario data corresponding to the first figure and the second figure from the server via the network; and storing the renewed scenario data to the content database.

16. The method of claim 13, further comprising: respectively transmitting the interactive instruction set to at least a drive component of the first figure or the second figure; and respectively driving the first figure or the second figure to perform an operation according to an action drive command in the interactive instruction set.

17. The method of claim 13, further comprising displaying dialogues or images in the interactive instruction set via a display unit.

18. The method of claim 13, further comprising playing dialogues or sound effects in the interactive instruction set via a speaker.

19. The method of claim 13, wherein, when the interactive instruction set is generated, the method further comprises the steps of: selecting the first figure as the initial figure; retrieving first interactive data for a specific topic for the first figure from the scenario data corresponding to the first figure; retrieving second interactive data corresponding to the first interactive data for the specific topic for the second figure from the scenario data corresponding to the second figure; and combining the first interactive data and the second interactive data as the interactive instruction set.

20. A machine-readable storage medium comprising a computer program, which, when executed, causes a device to perform a figure interactive method, and the method comprises: respectively detecting identification data of a first figure and a second figure; respectively retrieving scenario data corresponding to the first figure and the second figure from a content database according to the identification data of the first figure and the second figure; dynamically generating an interactive instruction set for the first figure and the second figure according to the scenario data corresponding to the first figure and the second figure; and enabling the first figure and the second figure to interact with each other according to the interactive instruction set.
Description



CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority of Taiwan Patent Application No. 098142210, filed on Dec. 10, 2009, the entirety of which is incorporated by reference herein.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] The disclosure relates generally to figure interactive systems and methods, and more particularly, to systems and methods that detect a plurality of figures and dynamically generate an interactive instruction set for the detected figures.

[0004] 2. Description of the Related Art

[0005] Figures or dolls are popular items. In addition to static figures, electronic figures have been developed. Electronic figures can be manipulated by electronic signals to increase applications thereof.

[0006] As an example, a figure device, or electronic figure, supporting multiple instant-messaging applications can connect to a personal computer, such that notifications can be provided when messages or new email messages are received, or when friends come online in the instant-messaging applications. For example, the required functions of an electronic rabbit figure can be set via a computer, and a server can transmit related data, such as weather forecasts or headline news, to the electronic rabbit figure, so that the data is displayed by the figure.

[0007] Generally, conventional electronic figures can only receive fixed messages and perform predefined operations according to the received messages. Some electronic figures can perform operations, such as music playback and dancing, based on predefined programs. However, since these programs are fixed and burned into the electronic figures, the operating flexibility of the figures is limited, hindering their popularity among users and the further development of electronic figures. With such limited variability, users often quickly lose interest in electronic figures. Currently, there is no technology in the field that automatically detects a plurality of figures and dynamically generates interactive content, as opposed to the fixed programs and operations of conventional electronic figures.

BRIEF SUMMARY OF THE INVENTION

[0008] Figure interactive systems and methods are provided.

[0009] An embodiment of a figure interactive system includes at least a base device. The base device includes a storage unit, a detecting unit, and a processing unit. The storage unit stores a content database. The detecting unit respectively detects identification data of at least a first figure and a second figure. The processing unit respectively retrieves scenario data corresponding to the first figure and the second figure from the content database, dynamically generates an interactive instruction set for the first figure and the second figure according to the scenario data corresponding to the first figure and the second figure, and enables the first figure and the second figure to interact with each other according to the interactive instruction set.

[0010] Another embodiment of a figure interactive system includes at least a first figure and a second figure, a base device, and an electronic device. The base device includes at least a detecting unit, and a first communication unit, wherein the detecting unit respectively detects identification data of the first figure and the second figure. The electronic device at least includes a second communication unit which can communicate with the first communication unit via a communication connection, a storage unit storing a content database, and a processing unit which respectively retrieves scenario data corresponding to the first figure and the second figure from the content database. Also, the processing unit dynamically generates an interactive instruction set for the first figure and the second figure according to the scenario data corresponding to the first figure and the second figure, and transmits the interactive instruction set to the base device. The base device enables the first figure and the second figure to interact with each other according to the interactive instruction set.

[0011] In an embodiment of a figure interactive method, identification data of at least a first figure and a second figure is respectively detected. Then, scenario data corresponding to the first figure and the second figure are respectively retrieved from a content database. The interactive instruction set for the first figure and the second figure is dynamically generated according to the scenario data corresponding to the first figure and the second figure, and the first figure and the second figure are enabled to interact with each other according to the interactive instruction set.

[0012] In some embodiments, the identification data of the first figure and the second figure can be transmitted to a server via a network. The server can respectively retrieve the scenario data corresponding to the first figure and the second figure according to the identification data of the first figure and the second figure, and transmit the scenario data corresponding to the first figure and the second figure to the base device, as the content database, via the network. In some embodiments, renewed scenario data corresponding to the first figure and the second figure can also be received via the network, and stored to the content database.

[0013] In some embodiments, when the interactive instruction set is generated, the first figure can be selected as the initial figure, and first interactive data for a specific topic is retrieved for the first figure from the scenario data corresponding to the first figure. Then, second interactive data corresponding to the first interactive data for the specific topic is retrieved for the second figure from the scenario data corresponding to the second figure. The first interactive data and the second interactive data can be added to the interactive instruction set. In some embodiments, the initial figure, the specific topic, and/or the first interactive data can be selected randomly, or according to a specific order.

[0014] Figure interactive methods may take the form of a program code embodied in a tangible media. When the program code is loaded into and executed by a machine, the machine becomes an apparatus for practicing the disclosed method.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] The invention will become more fully understood by referring to the following detailed description with reference to the accompanying drawings, wherein:

[0016] FIG. 1A is a schematic diagram illustrating an embodiment of a figure interactive system of the invention;

[0017] FIG. 1B is a schematic diagram illustrating another embodiment of a figure interactive system of the invention;

[0018] FIG. 2 is a schematic diagram illustrating an embodiment of the structure of a server of the invention;

[0019] FIG. 3 is a schematic diagram illustrating an embodiment of the structure of a base device of the invention;

[0020] FIG. 4 is a flowchart of an embodiment of a figure interactive method of the invention; and

[0021] FIG. 5 is a flowchart of an embodiment of a method for generating an interactive instruction set of the invention.

DETAILED DESCRIPTION OF THE INVENTION

[0022] Figure interactive systems and methods are provided.

[0023] FIG. 1A is a schematic diagram illustrating an embodiment of a figure interactive system of the invention.

[0024] The figure interactive system comprises a server 1000 and a base device 2000. The base device 2000 can simultaneously detect identification data of a plurality of figures (such as F1 and F2), and connect to the server 1000 via a network 3000. It is noted that only two figures are shown in this embodiment; however, the invention is not limited thereto.

[0025] FIG. 2 is a schematic diagram illustrating an embodiment of the structure of a server of the invention.

[0026] The server 1000 may be a processor-based electronic device, such as a general-purpose computer, a personal computer, a notebook, or a workstation. The server 1000 at least comprises a scenario database 1100. The scenario database 1100 can comprise scenario data respectively corresponding to a plurality of figures. It is understood that, in some embodiments, the scenario data can comprise dialogues, images, sound effects, music, light signals, and/or actions of the figures, such as swinging, vibrating, rotating, beating, and movements, among others.

[0027] FIG. 3 is a schematic diagram illustrating an embodiment of the structure of a base device of the invention.

[0028] The base device 2000 can comprise a detecting unit 2100, a storage unit 2200, and a processing unit 2300. It is understood that, in some embodiments, the identification data of the figure can be detected via RFID (Radio-Frequency Identification), an IR (Infrared) communication recognition system, a USB (Universal Serial Bus) wired/wireless communication recognition system, a two-dimensional/three-dimensional barcode recognition system, recognition software with related communication interfaces, or other recognition systems or manners. When several figures are placed on or close to the base device 2000, the detecting unit 2100 can simultaneously detect the identification data of the figures. The storage unit 2200 can at least comprise a content database 2210. The content database 2210 can store the scenario data (such as 2211 and 2212) corresponding to the respective figures. Similarly, in some embodiments, the scenario data can comprise dialogues, images, sound effects, music, light signals, and/or actions. The content database 2210 can further store an interactive instruction set 2220 corresponding to at least two figures. It is noted that the interactive instruction set 2220 can be dynamically generated according to the scenario data in the content database 2210. The generation and use of the interactive instruction set 2220 are discussed later. The processing unit 2300 performs the figure interactive method of the invention, which will be discussed further in the following paragraphs.
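The content database described above can be pictured as a simple keyed store: each piece of detected identification data (e.g. an RFID tag ID) maps to that figure's scenario data. The sketch below is a hypothetical illustration of such a store; the identifiers, field names, and sample data are assumptions, not taken from the application:

```python
# Hypothetical sketch of a content database keyed by figure identification
# data, as held by storage unit 2200 / content database 2210. Scenario data
# may include dialogues, sound effects, and drivable actions.

content_db = {
    "RFID-0001": {  # first figure (F1)
        "name": "F1",
        "dialogues": {"greeting": "Hello there!"},
        "actions": ["swing", "vibrate"],
    },
    "RFID-0002": {  # second figure (F2)
        "name": "F2",
        "dialogues": {"greeting": "Hi, nice to meet you!"},
        "actions": ["rotate", "beat"],
    },
}

def retrieve_scenario(identification_data):
    """Return a figure's scenario data by its identification data, or None."""
    return content_db.get(identification_data)
```

A lookup with an unknown tag simply returns `None`, which the base device could treat as a cue to query the server for that figure's data.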

[0029] FIG. 1B is a schematic diagram illustrating another embodiment of a figure interactive system of the invention. The figure interactive system comprises a server 1000, a base device 2000, and an electronic device 4000. The base device 2000 can simultaneously detect the identification data of several figures, such as F1 and F2, and communicate with the electronic device 4000 via a communication connection. The electronic device 4000 couples to the server 1000 via a network 3000. It is noted that, in this embodiment, the base device 2000 at least comprises the detecting unit 2100 of FIG. 3 and a first communication unit (not shown), and the electronic device 4000 at least comprises the storage unit 2200 and the processing unit 2300 of FIG. 3 and a second communication unit (not shown). The functions and features of the detecting unit 2100, the storage unit 2200, and the processing unit 2300 are similar to those disclosed in FIG. 3, and are omitted here. The electronic device 4000 may be a general-purpose computer, a personal computer, a notebook, a netbook, a handheld computer, or a PDA (Personal Digital Assistant). The communication connection may be an RS232 connection, a USB communication connection, or an RFID communication connection. The first/second communication unit corresponding to the above communication connections may be an RS232 interface, a USB communication interface, or an RFID communication interface.

[0030] FIG. 4 is a flowchart of an embodiment of a figure interactive method of the invention. The figure interactive method of the invention can enable multiple figures to interact with each other. It is understood that, in this embodiment, a first figure and a second figure are used for explanation, but the invention is not limited thereto.

[0031] In step S4100, the identification data of the first figure and the second figure is respectively detected by the base device. Similarly, when several figures are placed on or close to the base device, the detecting unit of the base device can simultaneously detect the identification data of the figures. In step S4200, the scenario data corresponding to the first figure and the second figure are respectively retrieved from a content database according to the identification data of the first figure and the second figure. For example, the scenario data corresponding to the first figure and the second figure can be retrieved from the content database 2210 via the base device 2000 or the electronic device 4000. When the scenario data is retrieved via the electronic device 4000, the electronic device 4000 can transmit the scenario data corresponding to the first figure and the second figure to the base device 2000. In step S4300, the interactive instruction set for the first figure and the second figure is dynamically generated according to the scenario data corresponding to the first figure and the second figure, and in step S4400, the first figure and the second figure are enabled to interact with each other according to the interactive instruction set.
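The four steps S4100 to S4400 can be summarized as a short sketch. The class and helper names below are illustrative stand-ins, not components of the application; the "interaction" here is reduced to combining one piece of interactive data per figure:

```python
# Illustrative sketch of the flow of FIG. 4 (steps S4100-S4400).
# DetectingUnit and combine_interactive_data are hypothetical stand-ins.

class DetectingUnit:
    """Returns the identification data of figures placed on the base device."""
    def __init__(self, ids):
        self._ids = ids

    def detect_ids(self):
        # S4100: respectively detect identification data of the figures.
        return self._ids

def combine_interactive_data(first, second):
    """S4300: combine one piece of interactive data from each figure."""
    return [first["line"], second["line"]]

def figure_interactive_method(detecting_unit, content_db):
    id_first, id_second = detecting_unit.detect_ids()   # S4100
    scenario_first = content_db[id_first]               # S4200
    scenario_second = content_db[id_second]             # S4200
    # S4300/S4400: the resulting instruction set would then be dispatched
    # to the figures so they interact with each other.
    return combine_interactive_data(scenario_first, scenario_second)
```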

[0032] In other embodiments, the identification data of the first figure and the second figure can be transmitted to the server via the network, from the base device or the electronic device, before step S4200. After the identification data of the first figure and the second figure is received, the server can respectively retrieve the corresponding scenario data from the scenario database according to the identification data of the first figure and the second figure, and transmit the scenario data corresponding to the first figure and the second figure to the base device or the electronic device, such that the scenario data corresponding to the first figure and the second figure is stored into the content database.

[0033] It is understood that, in some embodiments, each figure may have at least a drive component (not shown). The drive component can receive the part of the interactive instruction set relating to the figure from the base device, and execute the action drive commands in the received interactive instruction set, such that the figure and/or at least one component of the figure is accordingly driven to perform an operation. In some embodiments, the base device or the figure may comprise a display unit (not shown in FIG. 3) for displaying the dialogues (e.g. texts), symbols, animations, colors, and/or images in the interactive instruction set. In some embodiments, the base device or the figure may comprise a speaker (not shown in FIG. 3) to play the dialogues as voice, music, and/or sound effects in the interactive instruction set.

[0034] It is noted that, in some embodiments, when the base device or the electronic device detects and recognizes the figures, the base device or the electronic device can immediately transmit the identification data of the figures to the server, and the server searches for, receives, or generates the scenario data corresponding to the figures, dynamically generates the interactive instruction set according to the scenario data, and transmits the interactive instruction set back to the base device, such that the first figure and the second figure can interact with each other according to the interactive instruction set. In some embodiments, the base device or the electronic device can store the scenario data corresponding to the figures in advance. When the identification data of the figures is detected, the scenario data corresponding to the figures can be directly retrieved from the content database of the storage unit in the base device or the electronic device, and the interactive instruction set can be dynamically generated according to the retrieved scenario data. In some embodiments, the base device or the electronic device can periodically or randomly receive renewed scenario data corresponding to the figures via the network, and store the renewed scenario data to the content database.
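The cache-first behavior described here, retrieving scenario data locally when available and otherwise fetching it from the server and storing it, can be sketched as follows. The `fetch_from_server` callable is a hypothetical stand-in for the network round trip to the server's scenario database:

```python
def get_scenario_data(identification_data, content_db, fetch_from_server):
    """Return scenario data for a figure, preferring the local content database.

    Falls back to the server (represented by the hypothetical
    fetch_from_server callable) when the data is not cached, then stores
    the result locally, mirroring the renewal behavior of paragraph [0034].
    """
    if identification_data in content_db:
        return content_db[identification_data]   # served from local cache
    scenario = fetch_from_server(identification_data)
    content_db[identification_data] = scenario   # cache for later use
    return scenario
```

On a second lookup for the same figure, the server is no longer contacted; a periodic renewal could simply overwrite the cached entry with fresh data.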

[0035] FIG. 5 is a flowchart of an embodiment of a method for generating an interactive instruction set of the invention.

[0036] In step S5100, a specific topic is determined. It is understood that, in some embodiments, the specific topic may be a classification of scenarios, such as a narrative, emotional, or combat scenario. In some embodiments, the specific topic can be determined randomly. In step S5200, a figure is selected from the figures for interaction as the initial figure. Similarly, in some embodiments, the initial figure can be randomly selected from the figures. Then, in step S5300, interactive data for the specific topic is retrieved for the initial figure from the scenario data corresponding to the initial figure. Similarly, in some embodiments, the interactive data for the initial figure can be randomly selected from the scenario data corresponding to the initial figure. As described, in some embodiments, the scenario data/interactive data can comprise dialogues, images, sound effects, and/or actions. After the interactive data for the initial figure is determined, in step S5400, interactive data corresponding to the interactive data for the initial figure, for the specific topic, is retrieved for another figure (called the associated figure) from the scenario data corresponding to the associated figure. It is understood that, in some embodiments, each piece of scenario data/interactive data can define a tag, and relationships among the scenario data/interactive data can be established via the tags. In step S5500, it is determined whether the generation of the interactive instruction set is complete. It is noted that this determination may differ based on different requirements and applications. In some embodiments, the scenario data corresponding to the initial figure and/or the associated figure may define a terminal tag. When the interactive data corresponding to the initial figure and/or the associated figure has the terminal tag, the generation of the interactive instruction set is complete. When the generation of the interactive instruction set is not complete (No in step S5500), in step S5600, interactive data corresponding to the interactive data for the associated figure, for the specific topic, is retrieved for the initial figure from the scenario data corresponding to the initial figure, and the procedure returns to step S5400. When the generation of the interactive instruction set is complete (Yes in step S5500), in step S5700, the interactive data corresponding to the initial figure and the associated figure are combined as the interactive instruction set.
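One way to read the loop of FIG. 5 is as an alternating lookup between the two figures, driven by tags that link each piece of interactive data to a response, and halted by a terminal tag. The sketch below is an assumed rendering of that loop; the data layout and field names (`say`, `next`, `terminal`, the `"start"` tag) are hypothetical:

```python
def generate_instruction_set(initial_scenario, associated_scenario, topic):
    """Sketch of the FIG. 5 loop (steps S5200-S5700).

    Each scenario maps, per topic, a tag to interactive data:
    {topic: {tag: {"say": ..., "next": <tag answered by the other figure>,
                   "terminal": bool}}}. All field names are hypothetical.
    """
    instruction_set = []
    scenarios = (initial_scenario, associated_scenario)
    tag = "start"   # S5300: first interactive data for the initial figure
    turn = 0        # even turns: initial figure; odd turns: associated figure
    while True:
        data = scenarios[turn % 2][topic][tag]
        instruction_set.append(data["say"])
        if data.get("terminal"):   # S5500: terminal tag ends generation
            break
        tag = data["next"]         # S5400/S5600: tag of the other figure's reply
        turn += 1
    return instruction_set         # S5700: combined interactive instruction set
```

The tags play the role of the relationships described in paragraph [0036]: each entry names the tag of the reply expected from the other figure, so generation simply walks the chain until an entry marked terminal is reached.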

[0037] It is noted that the embodiment of FIG. 5 is used to generate the interactive instruction set for a specific topic. However, in some embodiments, the specific topic need not be determined in advance, and the interactive instruction set corresponding to the respective figures can be determined directly according to the respective scenario data.

[0038] Therefore, the figure interactive systems and methods can dynamically generate an interactive instruction set for multiple figures, and enable the figures to interact with each other according to the interactive instruction set. Thus, the operating flexibility of the figures is increased.

[0039] Figure interactive methods, or certain aspects or portions thereof, may take the form of a program code (i.e., executable instructions) embodied in tangible media, such as floppy diskettes, CD-ROMS, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine thereby becomes an apparatus for practicing the methods. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to the application of specific logic circuits.

[0040] While the invention has been described by way of example and in terms of preferred embodiment, it is to be understood that the invention is not limited thereto. Those who are skilled in this technology can still make various alterations and modifications without departing from the scope and spirit of this invention. Therefore, the scope of the present invention shall be defined and protected by the following claims and their equivalents.

* * * * *

