Detecting And Quantifying A Liquid And/or Food Intake Of A User Wearing A Hearing Device

Feilner; Manuela

Patent Application Summary

U.S. patent application number 17/690958 was filed with the patent office on 2022-03-09 and published on 2022-09-22 for detecting and quantifying a liquid and/or food intake of a user wearing a hearing device. The applicant listed for this patent is SONOVA AG. Invention is credited to Manuela Feilner.


United States Patent Application 20220301683
Kind Code A1
Feilner; Manuela September 22, 2022

DETECTING AND QUANTIFYING A LIQUID AND/OR FOOD INTAKE OF A USER WEARING A HEARING DEVICE

Abstract

A method for detecting and quantifying a liquid and/or food and/or medication intake of a user wearing a hearing device which comprises at least one microphone. The method comprises: receiving an audio signal from the at least one microphone and/or a sensor signal from at least one further sensor; and collecting and analyzing the received audio signal and/or further sensor signals so as to detect each time the user drinks and/or takes medication and/or eats something, wherein drinking and/or medication intake is distinguished from eating and/or wherein drinking is distinguished from medication intake, and so as to determine values indicative of how often this is detected and/or a respective amount of liquid and/or food and/or medication ingested by the user.


Inventors: Feilner; Manuela; (Egg b. Zurich, CH)
Applicant: SONOVA AG, Staefa, CH
Family ID: 1000006253766
Appl. No.: 17/690958
Filed: March 9, 2022

Current U.S. Class: 1/1
Current CPC Class: G16H 50/30 20180101; G16H 20/10 20180101; G16H 20/60 20180101; G06N 3/04 20130101; H04R 1/08 20130101; H04R 1/1016 20130101
International Class: G16H 20/60 20060101; G16H 50/30 20060101; G16H 20/10 20060101; H04R 1/10 20060101; H04R 1/08 20060101; G06N 3/04 20060101

Foreign Application Data

Date Code Application Number
Mar 22, 2021 EP EP21163914

Claims



1. A method for detecting and quantifying a liquid and/or food and/or medication intake of a user wearing a hearing device which comprises at least one microphone, the method comprising: receiving an audio signal from the at least one microphone and/or a sensor signal from at least one further sensor; collecting and analyzing the received audio signal and/or further sensor signals so as to detect each time the user drinks and/or takes medication and/or eats something, wherein drinking and/or medication intake is distinguished from eating and/or wherein drinking is distinguished from medication intake, and so as to determine values indicative of how often this is detected and/or a respective amount of liquid and/or food and/or medication ingested by the user; wherein the step of analyzing includes applying one or more machine learning algorithms in the hearing device or in a hearing system, part of which the hearing device is, or in a remote server or cloud connected to it; and storing the determined values in the hearing system and, based on the stored values, generating a predetermined type of output.

2. The method of claim 1, wherein, in the step of analyzing, at least one of the machine learning algorithms is applied in its training phase so as to learn user-specific manners of drinking and/or eating and/or medication intake; and the newly learned user-specific manners are incorporated in the future analysis step.

3. The method of claim 1, wherein, in the step of analyzing, two or more different phases of drinking or, respectively, eating or, respectively, medication intake, are distinguished in the course of detecting a liquid and/or food and/or medication intake of the user; and the analysis of the different phases is based on signals from correspondingly different sensors and/or is performed by correspondingly different machine learning algorithms.

4. The method of claim 3, wherein the different phases of drinking and/or medication intake comprise one or more of the following phases: bringing a source of liquid in contact with the mouth, based at least on a signal from at least one movement sensor and/or orientation sensor sensing a corresponding movement of some upper body part of the user; tilting of the user's head, based at least on a signal from at least one movement sensor sensing a corresponding movement of the head of the user and/or based at least on a signal from at least one orientation sensor sensing a corresponding orientation of the head of the user relative to the surface of the earth; gulping or sipping the liquid and/or swallowing the medication, based at least on a signal from the at least one microphone and/or on a signal from at least one movement sensor sensing a corresponding movement of the user's throat, head and/or breast; removing the mouth from the source of liquid, based at least on a signal from the at least one microphone and/or on a signal from at least one movement sensor sensing a corresponding movement of some upper body part of the user.

5. The method of claim 4, wherein the different phases of medication intake further comprise one or more of the following phases: bringing, before the source of liquid is brought in contact with the mouth, a medication in contact with the mouth and/or inserting the medication into the mouth, based at least on a signal from at least one movement sensor and/or orientation sensor sensing a corresponding movement of some upper body part of the user.

6. The method of claim 4, wherein drinking is distinguished from medication intake by a different tilting angle of the user's head relative to the surface of the earth, based at least on the signal from the at least one movement sensor and/or the at least one orientation sensor.

7. The method of claim 1, wherein the further sensor signals comprise physiological signals indicative of a physiological property of the user collected by at least one physiological sensor, and wherein, in the step of analyzing, an event of drinking or, respectively, eating or, respectively, medication intake and/or which kind of liquid or, respectively, food or, respectively, medication the user is taking is further determined based on the physiological property.

8. The method of claim 7, wherein the physiological signals are indicative of at least one of a cardiovascular property, a body fluid analyte level, and a body temperature.

9. The method of claim 7, wherein an event of water intake and/or an amount of water ingested by the user during the drinking or, respectively, eating or, respectively, medication intake is estimated based on the physiological property.

10. The method of claim 1, wherein at least one of the machine learning algorithms is based on an artificial neural network; the input data set for the neural network is provided at a respective time point by the sensor data collected over a predetermined period of time up to this time point; the output data set for the respective time point includes a frequency or number of detected liquid and/or food and/or medication intakes as well as a respective or an overall amount of the liquid and/or food and/or medication ingested by the user and/or a duration of the detected liquid and/or food and/or medication intakes; wherein the learning phase is implemented by a supervised learning, in which the algorithm is trained using a database of input sensor data with labeled output data sets; or, alternatively, by an unsupervised learning in an environment with more information available and/or by a reinforcement learning or deep reinforcement learning.

11. The method of claim 10, wherein the artificial neural network is a deep neural network including at least one hidden layer.

12. The method of claim 1, wherein, in the step of analyzing, a temporal dynamic behavior of the drinking and/or eating and/or medication intake process is incorporated by applying at least one of the following machine learning methods: a Hidden Markov Model; a recurrent neural network.

13. The method of claim 1, wherein, in the step of analyzing, a dehydration risk of the hearing device user is estimated depending on the determined values of the amount and of a frequency of the user's liquid intake; and the generated output is configured depending on the estimated dehydration risk, so as to counsel the user to ingest a lacking amount of liquid and/or so as to inform the user and/or a person close to the user and/or a health care professional about the estimated dehydration risk.

14. The method of claim 1, wherein an interactive user interface is provided in the hearing system; and the steps of analyzing and/or generating an output are supplemented by an interaction with the user via the interactive user interface, wherein the user is enabled to input additional information pertaining to his liquid and/or food and/or medication intake.

15. The method of claim 14, wherein additional information about a need to take a predetermined medication is stored in the hearing system; when a fluid intake of the user is detected, the output is generated depending on this additional information and comprises questioning the user, via the interactive user interface, whether he has taken the predetermined medication; and the user's response to this question via the interactive user interface is stored in the hearing system and/or transmitted to the user and/or a person close to the user and/or a health care professional, so as to verify that the user has taken the predetermined medication.

16. The method of claim 1, wherein, in the step of generating an output, depending on the determined frequency and amount of the liquid or, respectively, food or, respectively, medication ingested by the user, an output configured such as to enhance the user's desire to drink or, respectively, to eat something, or, respectively, to take the medication is generated by augmented reality means in the hearing system.

17. The method of claim 1, wherein when detecting that the user is drinking or, respectively, eating something or, respectively, taking medication and/or upon detecting which kind of liquid or, respectively, food or, respectively, medication the user is taking, an output configured such as to enhance the user's experience of drinking or, respectively, eating or, respectively, taking medication is generated by augmented reality means in the hearing system depending on this detection.

18. A computer-readable medium, in which a computer program is stored for detecting and quantifying a liquid and/or food and/or medication intake of a user wearing a hearing device which comprises at least one microphone, which program, when being executed by a processor, is adapted to carry out the steps of the method of claim 1.

19. A hearing device worn by a hearing device user, comprising: a microphone; a processor for processing a signal from the microphone; a sound output device for outputting the processed signal to an ear of the hearing device user; wherein the hearing device is adapted for performing the method of claim 1.
Description



RELATED APPLICATIONS

[0001] The present application claims priority to EP Patent Application No. EP21163914, filed Mar. 22, 2021, the contents of which are hereby incorporated by reference in their entirety.

BACKGROUND INFORMATION

[0002] Hearing devices are generally small and complex devices. Hearing devices can include a processor, a microphone, an integrated loudspeaker as a sound output device, memory, a housing, and other electronic and mechanical components. Some example hearing devices are Behind-The-Ear (BTE), Receiver-In-Canal (RIC), In-The-Ear (ITE), Completely-In-Canal (CIC), and Invisible-In-The-Canal (IIC) devices. A user may prefer one of these hearing devices over another based on hearing loss, aesthetic preferences, lifestyle needs, and budget.

[0003] Many elderly people do not drink sufficiently, for several reasons, e.g. a lack of thirst or forgetting that they should drink. Consequently, they have a high risk of dehydration, which may cause problems such as dizziness, mental decline, etc.

[0004] Hearing devices can be used to monitor the drinking behavior of their users and to counsel the users by helping them to develop a healthy, hydrated lifestyle. To this end, the hearing system shall automatically detect the fluid intake of a user.

[0005] In this context, for example, DE 10 2018 204 695 A1 discloses using hearing systems, which comprise hearing devices or other devices worn by the user on his head, such as earphones or headsets, for health monitoring. The document proposes recognizing a large variety of specific symptoms or irregularities in the breathing, speaking, sleeping, walking, chewing, swallowing or other body functions of the user by means of sensors such as microphones, vibration sensors or accelerometers, e.g. integrated in the hearing device worn by the user. Use cases address symptoms of various diseases such as Parkinson's disease, traumatic brain injury, tics, bruxism, hiccups, allergic reactions, coughing fits and many more. Inter alia, swallowing is proposed to be recognized by sensing a typical sound moving downwards away from the hearing device with a microphone, in combination with multiple characteristic muscle contractions sensed by a vibration sensor in the ear canal. It is also mentioned that a single gulp may be interpreted as swallowing saliva, but possibly also as medication intake, which may be used to monitor the latter. It is also briefly mentioned that a drinking process might be detected as more or less regular gulps by a microphone or a vibration sensor, and that the user may also be reminded to drink. However, no detailed evaluation algorithms are described for these different use cases.

[0006] A specific method of detecting dehydration by measuring a water level in the brain of the hearing device user, or a bioimpedance, is disclosed in US 2017/0289704 A1, based on the hypothesis that the magnetic/electric conductance in the head varies with the relative water level in the head, at least on a short term. If the measured water level is below a predefined threshold, a reminder signal reminding the user to drink something is generated by at least one of the hearing devices and sent to the user via, e.g., a smartphone or a smartwatch, or as a direct audio reminder, e.g. a speech message.

[0007] Another problem known in the art is that the elderly often lose their appetite, while bad eating habits may increase the risk of obesity or heart disease.

[0008] A further problem known in the art is that the elderly often forget to take their prescribed medication, or intentionally abstain from taking it because they underestimate the underlying health risk.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] Below, embodiments of the present invention are described in more detail with reference to the attached drawings.

[0010] FIG. 1 schematically shows a hearing system according to an embodiment.

[0011] FIG. 2 shows a flow diagram of a method according to an embodiment for detecting and quantifying a liquid and/or food and/or medication intake of a user wearing a hearing device of the hearing system of FIG. 1.

[0012] The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols.

DETAILED DESCRIPTION

[0013] Described herein are a method, a computer program and a computer-readable medium for detecting and quantifying a liquid and/or food and/or medication intake of a user wearing a hearing device which comprises at least one microphone. Furthermore, the embodiments described herein relate to a hearing system which comprises at least one hearing device of this kind and optionally a connected user device, such as a smartphone and/or a smartwatch.

[0014] It is a feature described herein to provide a reliable and robust method and system for detecting and/or quantifying a liquid and/or food and/or medication intake of a user wearing a hearing device. It is a further feature to provide a reliable and robust method of detecting dehydration and/or monitoring a correct intake of medication. It is a further feature to enhance the user's experience of drinking or, respectively, eating, or, respectively, medication intake by augmented reality means.

[0015] A first aspect relates to a method for detecting and quantifying a liquid and/or food and/or medication intake of a user wearing a hearing device which comprises at least one microphone.

[0016] The method may be a computer-implemented method, which may be performed automatically in the hearing device and/or in another device of a hearing system of which the user's hearing device is a part. As described in more detail herein below, the hearing system may consist of the hearing device alone, or may be a binaural hearing system comprising two hearing devices worn by the same user, or may comprise the hearing device and a remote device portable by the user, such as a smartphone or smartwatch, connected to the hearing device. One or both of the hearing devices may be worn on and/or in an ear of the user. A hearing device may be a hearing aid, which may be adapted for compensating a hearing loss of the user. A cochlear implant may also be a hearing device.

[0017] According to an embodiment, the method comprises receiving an audio signal from the at least one microphone and/or a sensor signal from at least one further sensor. As described in more detail herein below, the further sensors may include any types of physical or physiological sensors--e.g. movement sensors, such as accelerometers, and/or optical sensors, such as cameras, and/or physiological sensors such as (body) temperature sensors and/or heart rate sensors and/or photoplethysmography (PPG) sensors and/or bioelectric sensors (e.g., electrocardiography (ECG) sensors, electroencephalogram (EEG) sensors, electrooculography (EOG) sensors, etc.) and/or blood analyte sensors (e.g., optical sensors or radio frequency (RF) sensors sensitive to specific frequencies of an analyte in the blood such as, e.g., glucose, water, hemoglobin, etc., and/or voltametric sensors configured for voltametric measurements indicating a presence of an electroactive substance, e.g., a drug or a drug component, contained in a body fluid, e.g., sweat), etc.--integrated in the hearing device or possibly also in a connected user device, such as a smartphone or a smartwatch.

[0018] According to an embodiment, the method further comprises collecting and analyzing the received audio signal and/or further sensor signals so as to detect each time the user drinks and/or takes medication and/or eats something, wherein drinking and/or medication intake is distinguished from eating and/or wherein drinking is distinguished from medication intake. For example, drinking may be distinguished from eating and/or medication intake may be distinguished from eating and/or drinking may be distinguished from medication intake. As another example, drinking (irrespective whether the drinking may be accompanied by a medication intake or whether the drinking occurs without a medication intake) may be distinguished from eating. As another example, a medication intake (irrespective whether the medication intake may be accompanied by drinking or whether the medication intake occurs without drinking) may be distinguished from eating. As a further example, drinking may be distinguished from a medication intake, e.g., drinking without medication intake may be distinguished from drinking with medication intake. As yet another example, drinking may be distinguished from a medication intake and drinking may also be distinguished from eating and medication intake may also be distinguished from eating. The analysis further includes determining values indicative of how often this is detected and/or a respective amount of liquid and/or food and/or medication ingested by the user.

[0019] The determined values may then be stored in the hearing system and, based on the stored values, a predetermined type of output is generated, in particular, to the user and/or to a third party, such as a person close to the user and/or a health care professional. The predetermined type of output may, for example, include one or more of the following: a textual, acoustical, graphical, vibrational output, as well as an output generated using augmented reality and/or an interactive user interface etc. The specific choice of a suitable output may, for instance, depend on the specific goals and use cases addressed in the different embodiments as described herein below.

[0020] According to an embodiment, the above-mentioned step of analyzing includes applying one or more machine learning algorithms, subsequently and/or in parallel, in the hearing device or in the hearing system or in a remote server or cloud connected to the hearing device or system. By applying a suitable machine learning algorithm, a particularly reliable and/or robust method of detecting and quantifying the liquid and/or food intake of the user may be provided.

[0021] This may, for example, be achieved due to the capability of a machine learning algorithm to automatically adapt the analysis to individual drinking and/or eating habits and/or medication intake habits of the user, which may also take into account that habits of one and the same user often change with time, e.g. vary from season to season, or may even change rapidly with a changing lifestyle when the user moves to another country, changes jobs, marries, etc. Furthermore, the reliability and robustness may also be increased by a machine learning algorithm which is configured to automatically supplement the analysis by adding new sensor-based recognition criteria of the user's drinking and/or eating and/or medication intake behavior, after having started with some basic or initial recognition criteria. In the course of this, for example, sensor signals of additional sensors provided in the hearing system may be automatically included in the analysis algorithm by machine learning.

[0022] Therefore, according to an embodiment, at least one of the machine learning algorithms may be applied in its training phase so as to learn user-specific manners of drinking and/or eating and/or taking medication, in order to reliably detect whenever the user drinks or eats something or takes medication and to distinguish eating from drinking and/or medication intake and/or to distinguish drinking from medication intake, e.g., to distinguish eating from drinking and/or to distinguish eating from medication intake and/or to distinguish drinking from medication intake. For example, the machine learning algorithms may be applied to learn that--or how exactly--the user typically elevates his head slightly to take a gulp from a bottle or a sip from a glass or cup of liquid; or to learn that--or how exactly--the user is typically lowering his head when taking a bite when eating; or to learn that the user is typically raising and/or tilting his head even more when swallowing a medication in conjunction with drinking as compared to during regular drinking without a medication intake. The newly learned user-specific manners are then incorporated in the future analysis step. Further details and examples are described herein below.
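By way of illustration only, and not as part of the disclosed method, the following minimal Python sketch shows one way such user-specific learning could be set up: a classifier is (re)trained on labeled motion windows so that it adapts to how this particular user moves the head when drinking, eating or taking medication. The feature set, labels and function names are assumptions for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(accel_window):
        """accel_window: (n_samples, 3) accelerometer data -> feature vector."""
        mean = accel_window.mean(axis=0)   # average orientation (gravity direction)
        std = accel_window.std(axis=0)     # movement intensity per axis
        pitch = np.arctan2(mean[2], np.linalg.norm(mean[:2]))  # head-tilt proxy
        return np.concatenate([mean, std, [pitch]])

    def train_user_model(windows, labels):
        # labels, e.g. "drinking", "eating", "medication_intake", "other",
        # obtained for this user, e.g. via the interactive user interface.
        X = np.stack([extract_features(w) for w in windows])
        model = RandomForestClassifier(n_estimators=100)
        model.fit(X, labels)
        return model  # incorporated in the future analysis step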

[0023] Any method features described herein with respect to a drinking process (liquid intake) may also be applied by analogy to detecting and quantifying an eating process (food intake) of the hearing device user and/or by analogy to detecting and quantifying a medication intake process (e.g., swallowing a pill or medical syrup which may be in conjunction with a drinking process such as drinking water, or in the absence of such a drinking process) of the hearing device user. The detection algorithm is then correspondingly adapted to use and interpret the collected sensor signals so as to distinguish eating from drinking (e.g., drinking with or without an accompanying medication intake), and/or drinking is distinguished from medication intake and/or eating is distinguished from medication intake, and so as to learn the respective specific recognition and quantification criteria.

[0024] Particularly important examples of respective output generation in either case are described in more detail herein below. They provide, for example, novel and/or improved methods of dehydration detection, verification of a medication intake, enhancement of a liquid intake desire, more particularly a water intake desire as compared to more unhealthy drinking habits, and/or enhancement of an eating or drinking or medication intake experience of the hearing device user, so as to improve the user's quality of life and support a healthy lifestyle.

[0025] According to an embodiment, in the step of analyzing, two or more different (in particular, subsequent) phases of drinking or, respectively, eating or, respectively, medication intake, are distinguished in the course of detecting a liquid and/or food and/or medication intake of the user. Thereby, again, the robustness and reliability of the method may be increased.

[0026] Since different sensors might be better suited to detect the different phases, the analysis of the different phases may be based on signals from at least partly different sensors or groups of sensors. On the other hand, the analysis of different phases may also be performed by at least partly different machine learning algorithms. In the case of subsequent phases, the respective algorithms may also be applied in a corresponding subsequent manner.

[0027] In this embodiment, the different phases of drinking may, for example, comprise one or more of the following phases (whereas the different phases of eating may be defined in a similar manner by analogy):

[0028] A phase of drinking (e.g., drinking with or without an accompanying medication intake) and/or medication intake may be bringing a source of liquid in contact with the mouth. This may, for example, be grabbing a glass or bottle or another vessel containing the liquid. Alternatively, this may also be lowering the head so as to reach a jet of water, e.g. emanating from a water tap, with the mouth. Detecting this phase may be based, inter alia, on a signal from at least one movement sensor sensing a corresponding movement of some upper body part of the user and/or an orientation sensor sensing a corresponding change of orientation of the body part relative to the surface of the earth. Suitable movement sensors may include accelerometers provided at the user's head, e.g. in the hearing device, and/or at the user's wrist or finger, e.g. in a wrist band, smartwatch or finger ring, and/or at another upper body part of the user. An accelerometer may also be employed as an orientation sensor, e.g. with respect to determining an orientation relative to the direction of the gravitational force acting perpendicularly to the earth's surface.

[0029] Another phase of drinking and/or medication intake may be tilting of the user's head. E.g., the user may tilt his head from a more downward orientation (e.g., toward the earth's surface, as when eating or performing another activity such as reading) to a more upward orientation (e.g., to facilitate the act of swallowing the liquid and/or medication). Such a user behavior may be detected based at least on a signal from at least one movement sensor sensing a corresponding movement of the head of the user and/or based at least on a signal from at least one orientation sensor, e.g. an accelerometer, sensing a corresponding orientation of the head of the user relative to the surface of the earth.

[0030] Another phase of drinking and/or medication intake may be gulping or sipping the liquid and/or swallowing the medication. Detecting this phase may be based, inter alia, on a signal from the at least one microphone, e.g. in a hearing device. In some instances, determining a number of sounds related to the drinking and/or medication intake, e.g. a number of temporally separated gulping, sipping, and/or swallowing sounds, can be employed to quantify the fluid intake and/or medication intake. Detecting this phase may also be based, e.g., on a signal from at least one movement sensor sensing a corresponding movement of the user's throat, head and/or breast. Suitable movement sensors may include at least one accelerometer provided in the hearing device and/or on a head, neck or breast of the user. Quantifying the fluid intake and/or medication intake may also be based on determining a number of those movements.
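A minimal sketch of such a quantification by counting temporally separated gulping sounds follows, assuming a band-passed microphone signal whose smoothed envelope is peak-picked; the band edges, smoothing length and thresholds are invented for illustration and are not values from the patent.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, find_peaks

    def count_gulps(audio, fs):
        # Keep a frequency band assumed to contain swallowing/gulping sounds.
        sos = butter(4, (50, 800), btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)
        # Smooth the rectified signal with a ~50 ms moving average.
        win = int(0.05 * fs)
        envelope = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        # Assume gulps are at least 0.5 s apart and well above the noise floor.
        peaks, _ = find_peaks(envelope,
                              height=5 * np.median(envelope),
                              distance=int(0.5 * fs))
        return len(peaks)  # number of detected gulps in this recording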

[0031] Another phase of drinking and/or medication intake may be removing the mouth from the source of liquid. This may be, for example, putting the glass, bottle or another vessel back on the table. Alternatively, this may also be raising the head away from the jet of water. Detecting this phase may be based, inter alia, on a signal from the at least one microphone and/or on a signal from at least one movement sensor sensing a corresponding movement of some upper body part of the user. Suitable movement sensors may include movement sensors and/or orientation sensors, such as accelerometers, provided at the user's head, e.g. in the hearing device, and/or at the user's wrist or finger, e.g. in a wrist band, smartwatch or finger ring, and/or at another upper body part of the user.

[0032] Another phase of medication intake may be bringing a medication in contact with the mouth and/or inserting the medication into the mouth, e.g., before a source of liquid is brought in contact with the mouth. Detecting this phase may also be based, e.g., on a signal from at least one movement sensor and/or orientation sensor, e.g. at least one accelerometer in the hearing device and/or at the user's wrist or finger and/or at another upper body part of the user. E.g., the medication may be a pill, a syrup, a droplet and/or the like. E.g., a source of liquid may subsequently be brought in contact with the mouth and swallowed to facilitate a swallowing of the medication. In some instances, medication intake may be distinguished from drinking, e.g., from a regular drinking activity not involving a medication intake, by detecting this phase.

[0033] In some instances, drinking, e.g., a drinking activity not involving a medication intake, may be distinguished from a medication intake, e.g., a drinking activity involving a medication intake, by a different tilting angle of the user's head relative to the surface of the earth. To illustrate, when swallowing a medication, the user may tilt his head to the back. E.g. the tilting angle may be larger during taking of medication as compared to a tilting angle occurring during drinking. The tilting angle of the head may be detected, e.g., based at least on a signal from at least one movement sensor and/or at least one orientation sensor.
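As a hedged illustration of this tilt-angle criterion, the sketch below derives the head pitch from the gravity component of a low-pass-filtered accelerometer signal and applies a threshold; the 35-degree value is purely illustrative and would, per the description above, be learned per user.

    import numpy as np

    def head_pitch_deg(gravity):
        """gravity: (gx, gy, gz) low-pass-filtered accelerometer sample."""
        gx, gy, gz = gravity
        return np.degrees(np.arctan2(-gx, np.sqrt(gy**2 + gz**2)))

    def classify_tilt(pitch_deg, threshold_deg=35.0):
        # A larger backward tilt while swallowing suggests medication intake.
        return "medication_intake" if pitch_deg > threshold_deg else "drinking"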

[0034] Another phase of drinking and/or medication intake and/or eating may be an impact on a physiological property of the user caused by the drinking and/or medication intake and/or eating. E.g., drinking liquid or eating nutrition may alter a cardiovascular property of the user, e.g. a heart rate, and/or change a blood analyte level, e.g., an amount of glucose and/or lipid and/or water contained in the blood, and/or can also have an impact on the body temperature (e.g., after consuming cold or hot liquid or food and/or after consuming a large amount of food or liquid). As another example, a medication intake can also alter a cardiovascular property of the user, e.g. a blood pressure and/or heart rate, and/or can also change a blood analyte level, e.g., a drug or drug component of the medication contained in the blood. A concentration of the blood analyte level, e.g., glucose, water, lipid, a drug component, etc., may also be determined, e.g. to quantify the ingested liquid and/or medication and/or food. Detecting this phase may be based on further sensor signals, which may comprise physiological signals indicative of a physiological property of the user, which may be collected by at least one physiological sensor. E.g., the physiological sensor(s) may be included in the hearing device and/or they may be worn at another body portion of the user than the ear, for instance at a wrist (e.g., included in a smartwatch) or a finger (e.g., included in a finger ring) and/or any other upper or lower body portion suitable to detect a physiological property of the user. Correspondingly, an event of drinking or, respectively, eating or, respectively, medication intake and/or which kind of liquid or, respectively, food or, respectively, medication the user is taking may be further determined based on the physiological property.

[0035] A physiological property, as determined by a physiological sensor in a physiological signal, may comprise any measurable biological characteristic of a human being, e.g., the user, such as a vital sign and/or a physiological property of the human being. The physiological property may be measured by detecting any form of energy and/or matter intrinsic to the human being and/or emitted from the human being and/or caused by the human being. In some implementations, the physiological signals are indicative of at least one of a cardiovascular property (e.g., a heart rate and/or a blood pressure and/or a blood oxygen saturation level, etc.), a body fluid analyte level (e.g., a concentration of an analyte, such as hemoglobin, lipid, glucose, water, a drug component, etc., in a body fluid, e.g., in blood and/or in sweat, etc.), and a body temperature.

[0036] In some implementations, a physiological sensor configured to provide physiological signals indicative of a physiological property comprises a light source configured to emit light through a skin of the user and an optical detector for detecting a reflected and/or scattered part of the light, wherein the physiological signals comprise information about the detected light. In particular, the physiological signals may comprise information about a blood flow, e.g., blood volume changes, a heart rate, a blood pressure, a blood oxygen saturation level, etc., indicated in an photoplethysmography (PPG) measurement. The physiological signals may also comprise information about a blood analyte level, e.g. by detecting an absorption and/or emission spectrum of specific molecules contained in the blood, e.g., water and/or lipids and/or glucose and/or a drug component. In some implementations, the physiological sensor comprises an electrode configured to detect an electric signal induced through a skin of the user, wherein the physiological signals comprise information about the electric signal. In particular, the physiological signals may comprise information about a brain activity indicated in an electroencephalogram (EEG) measurement and/or information about a heart activity indicated in an electrocardiogram (ECG) measurement and/or information about an eye activity indicated in an electrooculography (EOG) measurement. In some implementations, the physiological sensor comprises a temperature sensor configured to detect a body temperature of the user, wherein the physiological signals comprise information about the body temperature. In some implementations, the physiological sensor comprises a radio frequency (RF) sensor configured to send energy at a radio frequency into tissue of the user and to detect a reflection and/or absorption thereof, for instance to determine a blood analyte level, e.g. an amount and/or density of certain molecules. In some implementations, the physiological sensor comprises a voltametric sensor configured to detect a voltametric property indicating a presence of an analyte as an electroactive substance, e.g., a drug or a drug component, which may be contained in a body fluid, e.g. sweat.
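For one of the cardiovascular properties mentioned above, the heart rate, a minimal sketch of how it could be derived from a PPG detector signal by counting systolic peaks is given below; the sampling rate and peak constraints are assumptions, not specifics of the disclosure.

    import numpy as np
    from scipy.signal import find_peaks

    def heart_rate_bpm(ppg, fs=100):
        centered = ppg - np.mean(ppg)
        # Physiological bound of ~180 bpm -> peaks at least fs/3 samples apart.
        peaks, _ = find_peaks(centered, distance=fs // 3,
                              prominence=0.3 * np.std(centered))
        return 60.0 * len(peaks) / (len(ppg) / fs)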

[0037] Detecting the physiological property may also be employed in the detection of liquid intake to determine a type of the ingested liquid, e.g., to distinguish an event, in which the user is drinking (pure) water, from another event, in which the user is drinking a liquid different from (pure) water or containing less water. To illustrate, the user may have rather unhealthy drinking habits and may not drink enough water, preferring other beverages which are dehydrating (e.g., coffee, alcoholic beverages, etc.) or which contain a considerable amount of sugar. E.g., detecting a blood analyte level such as glucose and/or water can give a direct indication whether a drinking event is related to water consumption or another type of liquid, and/or detecting a cardiovascular property can give an indirect indication thereof (e.g., consuming coffee can increase the blood pressure). Determining the type of the ingested liquid, e.g., water as compared to a different liquid type, can be applied in a dehydration detection as described herein above and below.

[0038] Correspondingly, the detection algorithm might also consist of a plurality of parts, e.g. two or three or more parts, which are processed in a temporal sequence, according to the above-described two or three or more drinking phases. Furthermore, instead of the above-mentioned exemplary two or three or more different phases, another number and/or types of different phases may be identifiable in the course of detecting a drinking or eating or medication intake process of the user, for example one or two or three or four or more subsequent phases.

[0039] According to an embodiment, at least one of the machine learning algorithms applied in the analysis step is based on an artificial neural network. The input data set for the neural network may be provided at a respective time point by the sensor data collected over a predetermined period of time up to this time point. The output data set for the respective time point may include a frequency or number of detected liquid and/or food and/or medication intakes and/or a respective or an overall amount of the liquid and/or food and/or medication ingested by the user and/or a duration of the detected liquid and/or food and/or medication intakes. The artificial neural network may, for example, be a deep neural network including one or multiple hidden layers.
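A schematic PyTorch rendering of this input/output interface is sketched below: the input is the sensor data collected over a fixed look-back window, and the output holds a count, an amount and a duration. All dimensions are invented for illustration; the patent does not specify an architecture.

    import torch
    import torch.nn as nn

    WINDOW = 30 * 60   # assumed: 30 min of sensor frames pooled at 1 Hz
    CHANNELS = 8       # assumed: e.g. audio level, 3-axis accel, PPG features

    class IntakeNet(nn.Module):
        """Deep network with hidden layers mapping a sensor window to
        [intake_count, amount_ml, duration_s] for the current time point."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),                                  # (B, WINDOW*CHANNELS)
                nn.Linear(WINDOW * CHANNELS, 128), nn.ReLU(),  # hidden layer 1
                nn.Linear(128, 32), nn.ReLU(),                 # hidden layer 2
                nn.Linear(32, 3),
            )

        def forward(self, x):
            return self.net(x)  # trained, e.g., with an MSE loss on labeled data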

[0040] The training phase of the (deep) neural network may be implemented by a supervised learning, in which the algorithm is trained using a database of labeled input sensor data with corresponding output data sets. A suitable database of training data, working well for the above-described analysis, contains, for example, recorded swallowing or, respectively, gulping sounds of a large number of people--in this example approximately 1000 adults. Alternatively or additionally, representative sound recordings of drinking, swallowing, gulping, chewing people available on the Internet may also be used as training data.

[0041] Furthermore, the training phase may be implemented by an unsupervised learning in an environment with still more information available (Internet of Things), e.g. by using data transmitted from smart devices related to drinking and eating and medication intake, such as smart water bottles known in the art, and/or additional information within a smart home and/or input from the user via an interactive user interface as described herein below. The training phase may also comprise reinforcement learning or deep reinforcement learning.

[0042] Moreover, the detection and/or quantification algorithm mentioned herein above and below might be a (deep) neural network, a statistical approach known as multivariate analysis of variance (MANOVA), Support Vector Machines (SVM) or any other machine learning algorithm, pattern recognition algorithm or statistical approach. The detection algorithm might consist of three parts, which are processed in a temporal sequence, according to the three drinking phases described above. This is, however, not necessary. Instead of processing the multiple phases, e.g. two or three or four phases or more, in a temporal manner and separately, the drinking (and, by analogy, eating) procedure might also be detected without distinguishing any phases. A deep learning algorithm might find other criteria during a training phase.

[0043] For instance, instead of distinguishing the above-described multiple phases (e.g., two or three or any other number of phases) of drinking and/or medication intake a priori in the step of analyzing, a temporal dynamic behavior of the drinking and/or medication intake process (in other words, the "history" of the drinking and/or medication intake process) may be incorporated by applying a Hidden Markov Model (HMM) or a recurrent neural network (RNN), such as a long short-term memory (LSTM) or a gated recurrent unit (GRU).
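The following sketch shows the recurrent variant in the same hedged spirit, here with a GRU (an LSTM or an HMM would be used analogously); the dimensions and the four-way output are illustrative assumptions.

    import torch
    import torch.nn as nn

    class IntakeGRU(nn.Module):
        def __init__(self, n_features=8, hidden=64):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 4)  # e.g. none/drink/eat/medication

        def forward(self, x):
            # x: (batch, time, n_features) sensor frames; the hidden state
            # carries the "history" of the ongoing drinking/eating/medication
            # process across time steps.
            out, _ = self.gru(x)
            return self.head(out[:, -1])  # decision for the current time point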

[0044] An artificial neural network or other machine learning algorithms as described herein may be implemented directly in the hearing device, e.g. in a chip with a high degree of parallel processing integrated in the hearing device to this end. Alternatively or additionally, this may also be implemented in any other device of the hearing system. Since the analysis steps of drinking or medication intake or eating detection and quantification, or further items such as the dehydration detection described herein above and below, need not necessarily be performed in real time and need not all be processed within the hearing device itself, more diverse and complex algorithms and processes than in the cited prior art can be implemented.

[0045] In any of the embodiments described herein, the hearing device or system may be configured to combine and analyze signals of several sensors from the following exemplary list of sensors. Corresponding detection criteria (as mentioned for each sensor) are, in general, user-specific and/or use-case-specific. These may be learned by a suitable machine learning algorithm, as described herein above and in the following. A combination of several sensors may, for instance, be suitable to detect all the above-mentioned phases of the drinking (or eating) procedure; a schematic combination of such per-sensor detections is sketched after this list:

[0046] Accelerometers (or gyroscopes) in the hearing device may be used to detect when the user elevates his head (e.g., slightly) or tilts his head (e.g., within a certain range of tilting angles) to take a sip. In contrast, to detect eating, the detection algorithm may be based on the fact that the user is rather lowering his head when taking a bite. As another example, to detect medication intake, the detection algorithm may be based on the fact that the user elevates his head even higher and/or tilts his head even more to the back when taking medication.

[0047] In addition, the user might wear a smartwatch or a wristband around his wrist or a finger ring at one of his fingers with integrated movement sensors. While the user brings the glass to his mouth and while drinking, his arm follows a specific gesture, which may be recognized by these sensors so as to indicate the possible action of drinking. This movement pattern might also be used to distinguish between drinking, eating and other possible actions of the user.

[0048] In addition, a drinking bottle might incorporate sensors and be connected to the smart home (also known as smart water bottles). The hearing device or system may be connected to the smart home as well and receive those signals to incorporate them into the detection process. Alternatively, those data may be collected in a cloud or a smart home device to compute the algorithms.

[0049] Physiological sensors might be incorporated to detect intake of liquid or medication or food: a physiological property of the user as detected by such a sensor, e.g., a heart rate, a blood pressure level, a blood analyte level, etc., can be influenced by the drinking, eating and/or medication intake. E.g., the heart rate can be influenced and altered by swallowing liquid or solid nutrition. This effect may also strongly vary depending on the type and quantity of liquid or food taken by the user, which might be incorporated in the analysis algorithm to determine the respective values or detect the difference. Further, certain types of medication, even medication unrelated to a cardiovascular treatment, can have a secondary effect of altering the blood pressure of the user after the medication intake, e.g. by raising or lowering the blood pressure to a certain degree. Detecting the blood pressure can thus be used to verify and/or determine (qualitatively and/or quantitatively) an intake of medication. Further, detecting a blood analyte level by a blood analyte sensor (as described above) can give indications of a consumed liquid and/or medication and/or nutrition.

[0050] Other sensors, such as electroencephalographic (EEG) sensors, eye gaze or a camera within the smart home or integrated into glasses might be incorporated for detection.

[0051] In addition, other sensors might be applied, such as:

[0052] Acoustic sensors: the microphones within the hearing device or microphones in the ear canal might detect the gulping sound of a liquid and/or the swallowing sound of medication, or might improve the detection rate by recognizing and/or excluding other actions, such as own-voice activity or eating, and/or by recognizing different actions of drinking and/or medication intake and/or eating, allowing the system to distinguish between those actions.

[0053] Voice pickup units (VPUs) might be used to improve the detection of gulping and/or swallowing of medication and to distinguish it from eating, and vice versa.
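The sketch announced above combines the per-sensor detections from this list into one decision. The sensor names, weights and threshold are invented for illustration; in the described system they would rather be learned by the machine learning algorithm.

    def fuse_detections(scores, weights=None, threshold=0.5):
        """scores: dict mapping sensor name -> confidence in [0, 1]."""
        weights = weights or {"accelerometer": 0.3, "microphone": 0.4,
                              "wrist_motion": 0.2, "physiological": 0.1}
        combined = sum(weights.get(name, 0.0) * s for name, s in scores.items())
        return combined >= threshold

    # Example: a strong gulp sound plus a matching head tilt.
    # fuse_detections({"microphone": 0.9, "accelerometer": 0.7})  # -> True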

[0054] According to an embodiment, in the step of analyzing, a dehydration risk of the hearing device user is estimated depending on the determined values of the amount and/or of a frequency (i.e., of how often the user has drunk) of the user's liquid intake. The risk of dehydration may be estimated depending on the drinking behavior, e.g. the determined amount of liquid intake is compared with a predetermined value "Dmin" which describes the minimum amount of liquid the user needs to ingest. This value can be a fixed number or might be calculated depending on the environmental conditions, such as the ambient temperature, on the activity behavior of the user, e.g. walking or sitting, and/or on medical conditions, such as weight, etc. The value "Dmin" can be adapted by the user via the interactive user interface described below and/or by his doctor. Determining a type of the ingested liquid, e.g., water as compared to a different liquid type, as described above, may be employed to further improve the estimation of a dehydration risk. For instance, the amount of liquid intake may be determined only for an amount of (pure) water intake, and/or the determined amount of liquid intake may be corrected with respect to a number of events in which (pure) water has been ingested as compared to a number of events in which a different liquid type has been ingested and/or a number of events in which a dehydrating liquid type (e.g., coffee) has been ingested. As another example, the amount of liquid intake may be determined by weighting each drinking event with respect to the type of the ingested liquid, wherein (pure) water is associated with the largest weight. The weight can thus be indicative of a contribution of a certain type of ingested fluid to the hydration of the user and/or of an impact of the liquid type on the dehydration risk. E.g., a liquid type containing sugar may be associated with a smaller weight than (pure) water. A dehydrating liquid type may be associated with an even smaller weight.
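A worked toy example of this weighting scheme follows: each drinking event contributes its volume times a liquid-type weight, and the weighted total is compared with "Dmin". All weights, volumes and the default "Dmin" are invented for illustration.

    LIQUID_WEIGHTS = {"water": 1.0, "juice": 0.8, "soda": 0.6, "coffee": 0.3}

    def dehydration_risk(events, d_min_ml=1500.0):
        """events: iterable of (liquid_type, volume_ml); returns risk in [0, 1]."""
        effective = sum(LIQUID_WEIGHTS.get(liquid, 0.5) * volume
                        for liquid, volume in events)
        return max(0.0, 1.0 - effective / d_min_ml)

    # Three glasses of water plus one coffee: 3*200*1.0 + 150*0.3 = 645 ml
    # effective intake, so dehydration_risk(...) = 1 - 645/1500 = 0.57.
    # dehydration_risk([("water", 200)] * 3 + [("coffee", 150)])  # -> 0.57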

[0055] In this embodiment, the generated output may be configured depending on the estimated dehydration risk so as to counsel the user to ingest a lacking amount of liquid and/or so as to inform the user and/or a person close to the user and/or a health care professional about the estimated dehydration risk. Counseling the user may be implemented, e.g., using a smartphone app and/or acoustically via an integrated loudspeaker of the hearing device and/or via vibration on devices such as a wristband, smartwatch or finger ring, and/or via augmented reality using the user's glasses to generate a virtual image of a glass or bottle.

[0056] Alternatively, instead of a system design in which users are admonished to drink more, the system may enhance the desire to drink when dehydration is detected. This type of output depending on the estimated dehydration risk is described in more detail further below.

[0057] According to an embodiment, an interactive user interface is provided in the hearing system and the steps of analyzing and/or generating an output are supplemented by an interaction with the user via the interactive user interface, wherein the user is enabled to input additional information pertaining to his liquid and/or food and/or medication intake.

[0058] For example, such an interface may be configured to enable the user to manually enter data on his drinking behavior and/or to correct and/or specify the liquid intake. It might further provide the user with a possibility to add comments with subjective descriptions and ratings about his health state, mental condition, feelings, thirst, appetite, social condition (e.g. alone or with company), cause of fluid intake, intake of medication, etc. Thereby, for example, the hearing system might proactively ask the user whether he has taken medication right after the system has detected fluid intake. The proactive questioning might occur during a specific predefined time stored in the hearing system, such as the morning, the evening, or just before or after a meal, which, in turn, might be determined using the analysis algorithms described herein. This proactive question shall help the user to keep track of the intake of medication.

[0059] The interactive user interface might also help to improve the accuracy of the detection in general or within a learning phase of the analysis algorithm. It may also serve for a more elaborated therapy.

[0060] The interactive user interface might comprise one or more of the following types of user interface:

[0061] An app on a smartphone, smartwatch or another connected user device, which provides a correction mode showing e.g. the detected gulps right after detection.

[0062] An entry mode provided in the hearing device (e.g. by voice recognition or via a button) or on a connected user device, configured to enable the user to specify the liquid and/or food and/or medication intake.

[0063] A conversational user interface incorporated in the hearing device(s).

[0064] According to an embodiment, additional information about a need to take a predetermined medication (e. g. on a regular basis) is stored in the hearing system. In this embodiment, when a fluid intake of the user is detected, the output is generated depending on this additional information and comprises questioning the user, via the interactive user interface, whether he has taken the predetermined medication. The user's response to this question via the interactive user interface may then be, for example, stored in the hearing system and/or transmitted to the user and/or to a third person close to the user and/or a health care professional, so as to verify that the user has taken the predetermined medication.
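One possible shape of this verification flow is sketched below, with hypothetical helper callbacks (ask_user, notify, store) standing in for the interactive user interface and the hearing system's storage; the medication windows are assumed example values.

    from datetime import time

    MEDICATION_WINDOWS = [(time(7, 0), time(10, 0)),    # assumed morning dose
                          (time(18, 0), time(21, 0))]   # assumed evening dose

    def on_fluid_intake_detected(now, ask_user, notify, store):
        """now: datetime of the detected fluid intake."""
        in_window = any(start <= now.time() <= end
                        for start, end in MEDICATION_WINDOWS)
        if not in_window:
            return
        taken = ask_user("Did you just take your prescribed medication?")
        store(now, taken)   # keep the response in the hearing system
        if not taken:
            notify("Medication intake not confirmed at the scheduled time.")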

[0065] Alternatively or additionally to questioning the user, detecting the intake of medication may also be based on the fluid intake detection in combination with the analysis of head movements of the user, as described above. This may also be implemented by the sensor-based machine learning algorithm as described herein above and below.

[0066] According to an embodiment, in the step of generating an output depending on the determined frequency and amount of the liquid or, respectively, food or, respectively, medication ingested by the user, an output configured such as to enhance the user's desire to drink or, respectively, eat something, or, respectively, take his medication is generated by augmented reality means in the hearing system. In particular, this may be an output generated upon estimating a dehydration risk in the embodiment described further above, as alternative to an output which admonishes the user to drink more. Such an output may also be selectively generated for the enhancement of the drinking or medication intake or eating desire, e.g. depending on a determined dehydration risk and/or failure of medication intake and/or poor nutrition. For instance, an output configured to enhance the user's desire to drink may only be generated in situations in which the user shall be motivated for the medication intake in conjunction with the drinking (e.g., in situations in which the dehydration risk is low, but a risk associated with a lack of medication intake has been determined as rather high), but not during regular drinking activities.

[0067] In this embodiment, an output configured to enhance the user's desire to drink or eat something, or to take (the prescribed) medication, may, for example, be achieved with suitable sound, smell and video effects and methods. This may include augmenting specific sounds or music associated by the user with drinking or suitable to enhance a drinking experience. This may also be achieved by presenting specific sounds or music associated by the user with eating and/or which can lead to an enhanced tasting experience. The suitable effects may be very user-specific and be determined as additional information by machine learning in the analysis step of the present method, and/or indicated by the user via the above-described interactive user interface.

[0068] The sounds or optical effects to be presented in order to enhance the user's desire to drink, and/or to take medication, e.g., during the drink, might, for example, strongly differ depending on the drink to be taken and on the user's cultural background and music preference, which may be evaluated by the hearing system in the analysis step. The same also applies to flavor effects. Furthermore, it is also known that there may be a strong user-specific correlation between smell and sound, for instance bound to certain situations in life remembered by the user, so that a suitable combination of smell and sound may lead to a still stronger desire of the user to drink or, respectively, eat something special in this embodiment. All this also applies to video effects in addition to or instead of sound effects.

[0069] Specifically, one or more of the following outputs may be generated by the hearing system in this embodiment:

[0070] An already existing drinking glass or bottle in the field of vision of the user might be augmented visually with colors representing the fluid, e.g. using smart glasses connected to the hearing device. Similarly, a package or bottle containing medication and/or a storage space used to store the medication might be visually augmented.

[0071] The hearing system might play sounds, such as water drops, the sound of filling a glass with water, or the splashing sound of opening a beer, etc., which might augment the desire to drink.

[0072] For the elderly, whose senses decline, the enhancement of smell might increase the experience as well. Depending on what the user is drinking, the smart home connected to the hearing device might release a matching scent into the room. E.g., when the content of the bottle or glass next to the user is tea, a strong tea scent is released; when the glass contains orange juice, the scent of fresh oranges.

[0073] A video representing a drinking experience, with the corresponding sound, might also be played. If the user has been watching TV (on a TV set connected to the hearing device) for a long time, videos without commercial content can be shown in between to enhance the user's desire to drink or eat something or to take the medication needed for the user's health.

[0074] According to an embodiment, when detecting that the user is drinking or, respectively, eating something or, respectively, taking medication and/or upon detecting which kind of liquid or, respectively, food or, respectively, medication the user is taking, an output configured such as to enhance the user's experience of drinking or, respectively, eating or, respectively, taking medication is generated by augmented reality means in the hearing system depending on this (in particular multisensory) detection. This may be achieved with methods similar to those described for the previous embodiment: sound, smell and video. Furthermore, a suitable output may be determined in a similar user-specific manner as described above.

[0075] Furthermore, a similar algorithm as described herein can be applied to monitor any other health-related activities and body symptoms, such as those mentioned at the outset.

[0076] The above-described embodiments, where the output is generated such as to enhance the user's desire to ingest some healthy liquid or food or prescribed medication, or to enhance the respective eating or drinking or medication intake experience, might be particularly applicable to elderly users.

[0077] Thereby, for example, for the elderly who have lost their appetite, the tasting experience may be enriched with multisensory cues and sounds designed such as to enhance the eating or drinking experience. Chewing sounds may be augmented so as to let the meal appear more fresh and/or crisp. In particular, by a suitable sound stimulus presented to the hearing device user, the perceived taste of food may be influenced (e.g. low-pitched sounds relate to bitter, salty flavors, while high-pitched sounds relate to sweet, sour flavors). The stimulus may be chosen so as to stimulate the consumption of certain types of food which better suit the dietary requirements of the user.

[0078] To support the determination of a suitable output and/or to support the sensor-based analysis algorithm, the user may enter the kind of drink or food or medication via the above-described user interface, or a barcode on the drink or food or medication package may be scanned with a connected user device such as a smartphone or a smartwatch.

[0079] By the method described herein, for example, obesity and a risk of heart disease due to poor eating (or also drinking) habits of users may be effectively reduced. Further, a health risk associated with an irregular or neglected or omitted medication intake can be effectively reduced.

[0080] Further aspects relate to a computer program for detecting and quantifying a liquid and/or food and/or medication intake of a user wearing a hearing device which comprises at least one microphone, which program, when being executed by a processor, is adapted to carry out the steps of the method as described above and in the following as well as to a computer-readable medium, in which such a computer program is stored.

[0081] For example, the computer program may be executed in a processor of a hearing device, which hearing device, for example, may be worn by the user behind the ear. The computer-readable medium may be a memory of this hearing device. The computer program also may be executed by a processor of a connected user device, such as a smartphone or smartwatch or any other type of mobile device, which may be a part of the hearing system, and the computer-readable medium may be a memory of the connected user device. It also may be that some steps of the method are performed by the hearing device and other steps of the method are performed by the connected user device.

[0082] In general, a computer-readable medium may be a floppy disk, a hard disk, a USB (Universal Serial Bus) storage device, a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory) or a FLASH memory. A computer-readable medium may also be a data communication network, e.g. the Internet, which allows downloading a program code. The computer-readable medium may be a non-transitory or transitory medium.

[0083] Further aspects relate to a hearing device worn by a hearing device user and to a hearing system comprising such a hearing device, as described herein above and below. The hearing device or, respectively, the hearing system is adapted for performing the method described herein above and below. As already mentioned further above, the hearing system may further include, by way of example, a second hearing device worn by the same user and/or a connected user device, such as a smartphone or smartwatch or other mobile device, or personal computer, used by the same user.

[0084] According to an embodiment, the hearing device comprises: a microphone; a processor for processing a signal from the microphone; a sound output device for outputting the processed signal to an ear of the hearing device user; and, as the case may be, a transceiver for exchanging data with the connected user device and/or with another hearing device worn by the same user.

[0085] It has to be understood that features of the method as described above and in the following may be features of the computer program, the computer-readable medium and the hearing device and the hearing system as described above and in the following, and vice versa.

[0086] These and other aspects will be apparent from and elucidated with reference to the embodiments described hereinafter.

[0087] FIG. 1 schematically shows a hearing system 10 including a hearing device 12 in the form of a behind-the-ear device carried by a hearing device user (not shown) and a connected user device 14, such as a smartphone or a tablet computer or any other body worn device, e.g., a smart watch, a wrist band, or a finger ring. It has to be noted that the hearing device 12 is a specific embodiment and that the method described herein also may be performed with other types of hearing devices, such as in-the-ear devices.

[0088] The hearing device 12 comprises a part 15 behind the ear and a part 16 to be put in the ear canal of the user. The part 15 and the part 16 are connected by a tube 18. In the part 15, at least one microphone 20, a sound processor 22 and a sound output device 24, such as a loudspeaker, are provided. The microphone(s) 20 may acquire environmental sound of the user and may generate a sound signal, the sound processor 22 may amplify the sound signal and the sound output device 24 may generate sound that is guided through the tube 18 and the in-the-ear part 16 into the ear canal of the user.

[0089] The hearing device 12 may comprise a processor 26 which is adapted for adjusting parameters of the sound processor 22 such that an output volume of the sound signal is adjusted based on an input volume. These parameters may be determined by a computer program run in the processor 26. For example, with a knob 28 of the hearing device 12, a user may select a modifier (such as bass, treble, noise suppression, dynamic volume, etc.) and levels and/or values of these modifiers. From this modifier, an adjustment command may be created and processed as described above and below. In particular, processing parameters may be determined based on the adjustment command and, based on these, for example, the frequency-dependent gain and the dynamic volume of the sound processor 22 may be changed. All these functions may be implemented as computer programs stored in a memory 30 of the hearing device 12, which computer programs may be executed by the processor 26 and/or the sound processor 22.
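Purely as an illustration of this flow (the modifier names, levels and the per-band gain mapping below are hypothetical assumptions, not values from this disclosure), a minimal sketch in Python might look as follows:

    # Hypothetical mapping from a user-selected modifier and level to a
    # frequency-dependent gain adjustment (in dB per frequency band).
    BANDS = ["low", "mid", "high"]

    def adjustment_command(modifier, level):
        """Translate a modifier (e.g., chosen via knob 28 or slider 44) into per-band gains."""
        if modifier == "bass":
            return {"low": 2.0 * level, "mid": 0.0, "high": 0.0}
        if modifier == "treble":
            return {"low": 0.0, "mid": 0.0, "high": 2.0 * level}
        return {band: 0.0 for band in BANDS}

    print(adjustment_command("bass", level=3))  # {'low': 6.0, 'mid': 0.0, 'high': 0.0}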

[0090] The hearing device 12 further comprises a transceiver 32 which may be adapted for wireless data communication with a transceiver 34 of the connected user device 14, which may be a smartphone or tablet computer. It is also possible that the above-mentioned modifiers and their levels and/or values are adjusted with the connected user device 14 and/or that the adjustment command is generated with the connected user device 14. This may be performed with a computer program run in a processor 36 of the connected user device 14 and stored in a memory 38 of the connected user device 14. The computer program may provide a graphical user interface 40 on a display 42 of the connected user device 14.

[0091] For example, for adjusting the modifier, such as volume, the graphical user interface 40 may comprise a control element 44, such as a slider. When the user adjusts the slider, an adjustment command may be generated, which will change the sound processing of the hearing device 12 as described above and below. Alternatively or additionally, the user may adjust the modifier with the hearing device 12 itself, for example via the knob 28.

[0092] The user interface 40 also may comprise an indicator element 46, which, for example, displays a currently determined listening situation.

[0093] The hearing device 12 further comprises at least one further sensor 50, the position of which in the part 15 of the hearing device 12 is to be understood as merely symbolic, i.e. the further sensors 50 may also be provided in other parts and at other positions in the hearing device 12. The microphone(s) 20 and the further sensor(s) 50 enable the hearing device 12 or the hearing system 10 to perform a multi-sensor-based analysis for detecting and quantifying a liquid and/or food intake of a user as described herein. The further sensor(s) 50 may be integrated not only in the hearing device 12, but additionally also in connected user devices such as the connected user device 14 shown in FIG. 1, or a wearable such as a wrist band or smart watch or finger ring, etc. (not shown). As described in more detail herein above and below, the further sensors 50 may include any types of physical or physiological sensors, e.g. movement sensors, such as accelerometers, and/or optical sensors, such as cameras, and/or temperature sensors and/or heart rate sensors, etc.

[0094] The hearing system 10 shown in FIG. 1 can be adapted for performing a method for detecting and quantifying a liquid and/or food intake of a user wearing the hearing device 12, as described in more detail herein above and below.

[0095] In some implementations, the hearing system 10 comprises the hearing device 12 without the connected user device 14. E.g., the hearing system 10 may consist of the hearing device 12, or the hearing system 10 may consist of two hearing devices 12 configured to be worn at a respective ear of the user in a binaural configuration. Such a hearing system can also be adapted for performing a method for detecting and quantifying a liquid and/or food intake of a user wearing the hearing device 12 (or two hearing devices), as described in more detail above and below.

[0096] FIG. 2 shows an example for a flow diagram of this method according to an embodiment. The method may be a computer-implemented method performed automatically in the hearing system 10 of FIG. 1.

[0097] In a first step S10 of the method, an audio signal from the at least one microphone 20 and/or a sensor signal from the at least one further sensor 50 is received, e.g. by the sound processor 22 and the processor 26 of the hearing device 12.

[0098] In a second step S20 of the method, the signal(s) received in step S10 are collected and analyzed in the hearing device 12 or in the hearing system 10, so as to detect each time the user drinks and/or eats something and/or takes medication, wherein drinking and/or medication intake is distinguished from eating and/or wherein drinking is distinguished from medication intake, and so as to determine values indicative of how often this is detected and/or a respective amount of liquid and/or food and/or medication ingested by the user. The step of analyzing may include applying one or more machine learning algorithms, which may be implemented on a processor chip integrated in the hearing device 12, or alternatively in the connected user device 14 or somewhere else in the hearing system or in a remote server or cloud connected to it.

[0099] In a third step S30 of the method, the determined values are stored in the hearing system 10 and, based on the stored values, a predetermined type of output as described in more detail herein above and below is generated, for example, to the user and/or to a third party, such as a person close to the user and/or a health care professional.
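As a purely illustrative sketch of how the three steps S10, S20 and S30 might be organized in software (all class and function names are hypothetical, and the thresholded "gulp energy" feature is merely a stand-in for the machine learning analysis):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class IntakeEvent:
        time: float    # time instance t of the detection
        kind: str      # "drink", "eat" or "medication"
        amount: float  # estimated amount, e.g. in ml

    @dataclass
    class IntakeLog:
        events: List[IntakeEvent] = field(default_factory=list)

        def store(self, event: IntakeEvent) -> None:  # S30: store determined values
            self.events.append(event)

        def daily_total(self, kind: str) -> float:
            return sum(e.amount for e in self.events if e.kind == kind)

    def analyze(window: dict) -> Optional[IntakeEvent]:  # S20: stand-in for the ML analysis
        # A real system would run a trained classifier on audio/sensor features;
        # here a hypothetical "gulp energy" feature is simply thresholded.
        if window["gulp_energy"] > 0.8:
            return IntakeEvent(window["t"], "drink", amount=20.0)
        return None

    log = IntakeLog()
    # S10: received and windowed audio/sensor signals (dummy values here)
    for window in [{"t": 0.0, "gulp_energy": 0.9}, {"t": 1.0, "gulp_energy": 0.1}]:
        event = analyze(window)
        if event is not None:
            log.store(event)
    print("total liquid today:", log.daily_total("drink"), "ml")  # S30: output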

[0100] In the following, the method is illustrated, by way of example only, for estimating a risk of dehydration of the user and generating a corresponding output to counsel the user to drink more, and for verifying a medication intake based on the detected liquid intake. It may also be applied in a similar manner to other use cases described in detail herein, such as enhancing the user's desire to drink and/or to take medication, or enhancing the eating or drinking or medication taking experience of the user.

[0101] In this example, the solution according to the present method comprises a so-called "sensory part", which may cover the above step S10 and at least a part of step S20 and comprise detecting how many times and for how long the user is drinking. It further comprises a so-called "actuator part", which may cover the above step S30 and at least a part of step S20 and comprise storing the detected values in the memory of the hearing device or in the memory of an additional device such as a smartwatch, a wristband, a finger ring, a belt, a cloud, etc.; estimating the risk of dehydration and counseling the user; and/or, if the user agrees, informing a third party, such as a doctor, nurses or caregivers, about the user's drinking behavior. That is, if the user agrees, his data and the values determined in the analysis step of the method are sent to third parties (in addition or alternatively to counseling the user). The sensory part is capable of detecting how often the user drinks and how many swallows or gulps the user takes each time. It is also capable of distinguishing drinking from eating, the user's own voice and other possible confusions. Three (to four) phases of drinking might be distinguished, such as:

[0102] Phase 1: Grabbing a glass or bottle and bringing it to the mouth

[0103] Phase 2: Gulping the liquid

[0104] Phase 3: Putting the glass back on the table. This may also include detecting the sub-phases of (3a) a movement to bring the glass/bottle from the mouth down to the table and (3b) the actual putting of the glass on the table.

[0105] Different sensors might be better suited to detect the different phases, for example:

[0106] Phase 1: movement and/or orientation sensors, e.g. within a wrist band or a smart watch or a finger ring and/or in a hearing device, might be most suited,

[0107] Phase 2: microphones within the hearing device and accelerometers within the ear canal might be most suited,

[0108] Phase 3: movement and/or orientation sensors, e.g. on a wrist band or smart watch or finger ring, and microphones might be most suited.

[0109] The hearing device 12 or system 10 may combine and analyze signals of several sensors to detect all phases of the drinking procedure:

[0110] Accelerometers (or gyroscopes) in the hearing device may be used to detect when the user elevates his head slightly to take a sip. In contrast, when eating, the user is rather lowering his head when taking a bite.

[0111] In addition, the user might wear a smartwatch or a wristband around his wrist or a ring around one of his fingers with integrated movement sensors. While the user brings the glass to his mouth and while drinking, his arm follows a specific gesture, which indicates the possible action of drinking. This movement pattern might also be used to distinguish between drinking and other possible actions of the user.

[0112] In addition, a drinking bottle might incorporate sensors and be connected to the smart home (also known as smart water bottles). The hearing device 12 or system 10, also connected to the smart home, receives those signals to incorporate them into the detection process. Alternatively, those data are collected in a cloud or a smart home device to compute the algorithms.

[0113] Physiological sensors, e.g., heart rate sensors and/or blood pressure sensors and/or temperature sensors, might be incorporated to detect fluid intake: the pulse of the heart and/or blood pressure and/or body temperature can be influenced and altered by swallowing liquid or solid nutrition. Further, the physiological sensors may be employed to determine a type of the ingested fluid, e.g. a blood pressure may be altered to a larger extent by consuming coffee as compared to consuming pure water.

[0114] Other sensors, such as electroencephalographic (EEG) sensors, eye gaze sensors or a camera within the smart home or integrated into glasses, might be incorporated for detection.

[0115] Physiological sensors, e.g., a sensor sensitive to an analyte in a body fluid such as blood or sweat, might be incorporated to detect a physiological effect of the fluid intake: the quantity of water contained in blood or tissue can be altered by the fluid intake. Further, the physiological sensors may be employed to determine a type of the ingested fluid, e.g. determining a glucose level can indicate whether the ingested fluid is pure water or a sugary drink, and/or determining a caffeine level and/or alcohol level can indicate whether the ingested fluid is a dehydrating fluid (e.g., coffee or an alcoholic beverage) rather than a hydrating fluid (e.g., water). Determining the type of the ingested fluid can be employed to render the estimation of a dehydration risk of the user more exact, e.g. by associating each type of ingested fluid with a specific weight indicative of its contribution to the hydration of the user, as described above (a sketch of this weighting follows after this list).

[0116] In addition, other sensors might be applied, such as:

[0117] Acoustic sensors: the microphones within the hearing device or microphones in the ear canal might detect the gulping sound or might improve the detection rate by excluding other actions, such as own-voice or eating.

[0118] Voice Pickup sensors (VPUs) might be used to improve the detection of gulping and distinguish it from eating.
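Purely as an illustration of the weighting idea mentioned above (all weights, fluid names and the default value are hypothetical assumptions, not values from this disclosure):

    # Hypothetical per-fluid hydration weights; coffee and alcohol are treated
    # as contributing less (or negatively) to hydration than water.
    HYDRATION_WEIGHT = {"water": 1.0, "juice": 0.9, "tea": 0.8, "coffee": 0.2, "beer": -0.3}

    def effective_hydration_ml(intakes):
        """intakes: list of (fluid_type, amount_ml) tuples from the detector."""
        return sum(HYDRATION_WEIGHT.get(fluid, 0.5) * ml for fluid, ml in intakes)

    print(effective_hydration_ml([("water", 200), ("coffee", 150)]))  # 230.0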

[0119] The outputs of the several sensors are fed into a detection algorithm. The detection algorithm might consist of three parts, which are processed in a temporal sequence, according to the three drinking phases described above. The algorithms might be a (deep) neural network, multivariate analysis of variance (MANOVA), Support Vector Machines (SVM) or any other machine learning algorithm, pattern recognition algorithm or statistical approach.
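A minimal sketch of such a per-phase detector (assuming scikit-learn; the random feature matrices and labels below are placeholders for real, pre-extracted sensor features, which this disclosure does not specify):

    import numpy as np
    from sklearn.svm import SVC

    # One SVM per drinking phase, applied in temporal sequence.
    # X*: feature matrices (windows x features), y*: 0/1 phase labels.
    rng = np.random.default_rng(0)
    X1, y1 = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)  # phase 1: reach/lift features
    X2, y2 = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)  # phase 2: gulp audio features
    X3, y3 = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)  # phase 3: put-down features

    detectors = [SVC(probability=True).fit(X, y) for X, y in [(X1, y1), (X2, y2), (X3, y3)]]

    def drinking_detected(w1, w2, w3, threshold=0.5):
        """Flag a drinking event only if all three phases are detected in order."""
        windows = (w1, w2, w3)
        probs = [d.predict_proba(w.reshape(1, -1))[0, 1] for d, w in zip(detectors, windows)]
        return all(p > threshold for p in probs), probs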

[0120] Instead of processing the three phases in a temporal manner and separately, the drinking procedure might also be detected without distinguishing any phases. A deep learning algorithm might find other criteria during a training phase. Since dehydration and drinking detection need not be solved in real time and need not be processed within the hearing device 12, more diverse and complex algorithms and processes are applicable.

[0121] Description of the method based on (deep) neural networks:

[0122] Input: data sets x = (t, x1, x2, x3, ...) for each time instance (t), where x1, x2, x3, ... represent the collected sensor values, as described above.

[0123] Output data: y = (t, fi, d) for the input time instance (t), where (fi) is the determined number and (d) the determined duration of fluid intake in this example.

[0124] Supervised learning: the algorithm is trained with labeled data.

[0125] The learning part can be supervised and trained with a database of sensor data. Here, a suitable database of training data based on the audio signal from a microphone contains recorded swallowing or gulping sound data of a large number of people, in this example 1000 randomly selected persons. Alternatively or additionally, representative sound data of drinking, swallowing, gulping, chewing etc. available on the Internet may be used as training data. The training can also be unsupervised, by learning in an environment with more information available, such as using smart water bottles and/or information within a smart home and/or input from the user. Reinforcement learning or "deep reinforcement learning", as a third category of machine learning paradigms beside supervised and unsupervised learning, offers different algorithms, which may be applied in the present method as well.
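A minimal supervised-training sketch (assuming scikit-learn and that labeled gulp/non-gulp audio windows have already been converted to feature vectors; the dataset, feature dimensionality and network size below are hypothetical stand-ins):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Hypothetical dataset: each row is a feature vector extracted from a short
    # audio window (e.g., spectral features); label 1 = gulp, 0 = other sound.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 16))
    y = rng.integers(0, 2, 1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))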

[0126] Instead of distinguishing three phases of drinking a priori, the "history" of the drinking process or, respectively, its temporal dynamic behavior, might be incorporated by applying different machine learning methods. Beside the traditional method of the Hidden Markov Model (HMM), recurrent neural networks (RNN) might be applied, e.g.:

[0127] Long short-term memory (LSTM)

[0128] Gated recurrent unit (GRU).

[0129] GRU might be more suited to the application due to its lower computational effort.
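As a sketch of such a recurrent detector (assuming PyTorch; the feature dimensionality, hidden size and single sequence-level output are illustrative assumptions):

    import torch
    import torch.nn as nn

    class GulpGRU(nn.Module):
        """GRU over a sequence of per-window sensor feature vectors; outputs
        the probability that the sequence contains a drinking event."""
        def __init__(self, n_features=16, hidden=32):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                        # x: (batch, time, n_features)
            _, h = self.gru(x)                       # h: (1, batch, hidden)
            return torch.sigmoid(self.head(h[-1]))   # (batch, 1)

    model = GulpGRU()
    dummy = torch.randn(4, 50, 16)  # 4 sequences of 50 sensor windows each
    print(model(dummy).shape)       # torch.Size([4, 1])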

[0130] Furthermore, the analysis may be implemented by parallel decision units in the hearing device 12 or in the hearing system 10. That is, instead of using one algorithm, several machine learning (ML) algorithms might be applied in parallel to compute the detection probabilities.

[0131] This may, for example, allow for the following features:

[0132] For detecting some drinking phases, one sensor and/or algorithm might be more suitable than another.

[0133] Weighting parameters (e.g. in the activation functions of the artificial neurons) might be separately introduced and determined for each drinking phase and each decision algorithm.

[0134] An algorithm downstream of the several decision algorithms, e.g. arranged in an architecture such as a decision tree, might be used to make the final decision.
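A minimal fusion sketch along these lines (the per-phase weights, probabilities and the final decision rule are hypothetical assumptions):

    import numpy as np

    # probs[i][j]: detection probability from algorithm i for drinking phase j
    probs = np.array([[0.9, 0.7, 0.8],     # e.g., SVM on motion features
                      [0.6, 0.9, 0.5]])    # e.g., neural network on audio features
    weights = np.array([[0.5, 0.2, 0.6],   # per-phase reliability of algorithm 0
                        [0.5, 0.8, 0.4]])  # per-phase reliability of algorithm 1

    fused = (weights * probs).sum(axis=0) / weights.sum(axis=0)  # weighted mean per phase
    drinking = bool((fused > 0.5).all())  # final rule: every phase detected
    print(fused.round(2), drinking)       # [0.75 0.86 0.68] True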

[0135] As already mentioned, the sensor data might be collected and analyzed in different types of analysis units, for example:

[0136] within the hearing device 12 or within the hearing system 10 incorporating additional devices such as the smartphone 14, a smartwatch, a wrist band, a remote control, a remote microphone, a hearing device case (etui), a TV-Connector, etc., or

[0137] on a server, in a cloud, or on a smart home device.

[0138] As already mentioned above, following on the above-described "sensory part", the "actuator part" of the method consists of several steps in this example: storing the detected values in a memory, estimating a dehydration risk, counseling the user, and informing a third party about the drinking behavior of the user. Two of these steps are further described in the following:

[0139] The risk of dehydration is estimated dependent on the detected drinking behavior, e.g. the determined amount of liquid ingested by the user is compared with a certain value "Dmin" describing the necessary minimum amount of drinking. This value can be a fixed number or might be calculated depending on one or more of the following factors:

[0140] on the environmental conditions, such as the ambient temperature: if the temperature around the user is very high, the value "Dmin" is increased;

[0141] on the activity behavior of the user: if he walks a lot (e.g. as detected by an accelerometer) or performs an activity with high physical effort (the heart rate is increased, e.g. as measured by a corresponding sensor), the value "Dmin" increases;

[0142] on medical conditions (e.g. as preset by the user's physician)

[0143] The value "Dmin" might be stored in a table of predetermined values or calculated with a regression equation, e.g. as known from medical studies. Alternatively or in addition, the value "Dmin" can also be adapted by the user or a doctor.
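A sketch of such a calculation (the base value, thresholds and increments below are hypothetical placeholders, not values from medical studies):

    def daily_dmin_ml(ambient_temp_c, steps, avg_heart_rate, base_ml=1500):
        """Hypothetical minimum daily liquid intake "Dmin" in milliliters."""
        dmin = base_ml
        if ambient_temp_c > 28:   # hot environment: drink more
            dmin += 500
        if steps > 8000:          # high walking activity
            dmin += 300
        if avg_heart_rate > 100:  # strenuous physical effort
            dmin += 400
        return dmin               # a physician could override this value

    print(daily_dmin_ml(ambient_temp_c=31, steps=9500, avg_heart_rate=85))  # 2300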

[0144] The above-mentioned step of counseling the user might include several options, such as one or more of the following:

[0145] a smartphone app on the connected user device 14 displays how much the user has drunk and how much the user should have drunk during the day.

[0146] the user gets notified when he should drink: acoustically, via vibration on devices such as a wristband or smartwatch, or via augmented reality. This may, for example, be implemented in that the user's glasses (or smart glasses being a part of the hearing system 10 or connected to it) visually augment a drinking glass or a drinking bottle.

[0147] Moreover, in the present example, an interactive user interface is provided in the hearing system 10 and the steps of analyzing (S20) and/or generating an output (S30) are supplemented by an interaction with the user via this user interface, wherein the user is enabled to input additional information pertaining to his liquid intake.

[0148] For example, such an interface may be configured such as to enable the user to manually enter data on his drinking behavior and/or to correct and/or specify the liquid intake. It might further provide the user with the possibility to add subjective descriptions and ratings of his health state, mental condition, feelings, thirst, appetite, social situation (e.g. alone or in company), cause of fluid intake, intake of medication, etc.

[0149] Thereby, for example, the hearing system 10 might proactively ask the user whether he has taken his medication, right after the system has detected fluid intake in step S20. The proactive questioning might occur during a specific time predefined and stored in the hearing system 10, such as in the morning, in the evening, or just before or after a meal, which, in turn, might be determined using the analysis algorithm in step S20. This proactive question is intended to help the user keep track of the intake of medication.
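A sketch of such proactive questioning logic (the reminder windows and the prompt text are hypothetical placeholders):

    from datetime import time

    # Hypothetical reminder windows during which a detected fluid intake
    # triggers the medication question (e.g., morning and evening doses).
    REMINDER_WINDOWS = [(time(7, 0), time(9, 0)), (time(19, 0), time(21, 0))]

    def maybe_ask_medication(now, fluid_intake_detected, medication_detected):
        """Return a prompt right after a detected fluid intake, but only
        when no accompanying medication intake was detected."""
        in_window = any(start <= now <= end for start, end in REMINDER_WINDOWS)
        if fluid_intake_detected and in_window and not medication_detected:
            return "You just drank something. Did you also take your medication?"
        return None

    print(maybe_ask_medication(time(8, 15), True, False))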

[0150] The interactive user interface might also help to improve the accuracy of the detection in general or within a learning phase of the analysis algorithm in step S20. It may also serve for a more elaborate therapy.

[0151] The interactive user interface (also denoted as UI) might comprise one or more of the following types of user interface:

[0152] An app on a smartphone (connected user device 14 in FIG. 1), smartwatch or another connected user device, which provides a correction mode showing e.g. the detected gulps right after detection (e.g. the graphical user interface 40 on the display 42 of the connected user device 14).

[0153] An entry mode provided in the hearing system (e.g. via the slider 44 in the graphical user interface 40 on the connected user device 14), configured to enable the user to specify the liquid intake.

[0154] A conversational user interface incorporated in the hearing device 12.

[0155] In some implementations of the above-described method, drinking may not be distinguished from a medication intake, i.e., it may not be separately determined whether an action of drinking is related to the drinking of a fluid (such as water) without any accompanying medication intake (such as swallowing a pill), and whether another action of drinking is carried out for the purpose of a medication intake (e.g., swallowing water to facilitate a swallowing of a pill). Those implementations may be beneficial to provide an estimation of the dehydration risk with a lower effort, e.g. under the assumption that a medication intake will not alter the dehydration risk to a considerable extent. In some other implementations of this method, drinking can be further distinguished from a medication intake, e.g., it may be determined whether the drinking is accompanied by a medication intake or not. Those implementations may be beneficial to provide an estimation of the dehydration risk with an increased accuracy, e.g. in certain cases of a medication which can impact the dehydration risk to a considerable extent. Further, the proactive inquiry to the user by the hearing system 10 whether he has taken medication right after the system has detected fluid intake in step S20 may be carried out depending on whether the medication intake has been detected or not, e.g., to remind the user to take his medication only in those cases in which it seems appropriate due to a lack of a detected medication intake.

[0156] In the following, the method will be further illustrated, by way of another example, for estimating a risk of an irregular or omitted medication intake of the user and generating a corresponding output to counsel the user to take his medication more regularly. It may also be applied in a similar manner for other use cases described in detail herein, such as enhancing the user's desire to drink and/or to take medication or enhancing the eating or drinking or medication taking experience of the user.

[0157] The method of estimating a risk of an irregular and/or omitted medication intake may comprise any of the steps S10, S20, and S30 described above in conjunction with estimating a risk of dehydration of the user, wherein, however, drinking is distinguished from medication intake, and those steps are modified according to the following description. During the gulping of the liquid (phase 2), a tilting of the user's head may be determined, e.g., by determining a tilting angle of the head relative to the surface of the earth with a movement sensor and/or an orientation sensor. Drinking (without medication intake) may then be distinguished from a medication intake (which may be accompanied by drinking) by determining whether the tilting angle is in a specific angular range associated with the medication intake (e.g., swallowing of a pill) rather than in another angular range typically occurring during a (regular) drinking not involving a medication intake.
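A sketch of this angular-range check (the ranges below are hypothetical; ranges of practical use would have to be learned per user or configured):

    # Hypothetical head-tilt ranges (degrees above horizontal) during gulping.
    DRINK_RANGE = (10.0, 30.0)       # regular sip from a glass
    MEDICATION_RANGE = (35.0, 70.0)  # stronger backward tilt to swallow a pill

    def classify_gulp(head_tilt_deg):
        if MEDICATION_RANGE[0] <= head_tilt_deg <= MEDICATION_RANGE[1]:
            return "medication intake (possibly with drinking)"
        if DRINK_RANGE[0] <= head_tilt_deg <= DRINK_RANGE[1]:
            return "drinking without medication intake"
        return "unclassified"

    print(classify_gulp(45.0))  # medication intake (possibly with drinking)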

[0158] Further, before the grabbing of a glass or bottle and bringing it to the mouth (phase 1), another phase associated with the medication intake may be determined, e.g., grabbing the medication (e.g., grabbing a pill and/or opening a pill dispenser and taking a pill) and bringing it to the mouth. This phase (which may be denoted as "phase 0") may be detected correspondingly to "phase 1" by movement sensors and/or orientation sensors, e.g. in a body worn device (such as a wrist band, smartwatch, finger ring, belt, etc.) and/or included in a hearing device, as described above. E.g., "phase 0" can be distinguished from "phase 1" by a different movement type and/or movement pattern. In this way, in cases in which such a "phase 0" is detected before "phase 1", a medication intake (which may be accompanied by drinking) can also be distinguished from drinking without medication intake, i.e. from cases in which "phase 1" is detected without a preceding "phase 0".

[0159] Further, after the medication and/or liquid has been ingested (i.e., after "phase 2" or after "phase 3"), an impact on a physiological property of the user caused by the medication intake and/or drinking may be determined, e.g. a change of a heart rate and/or a change of a blood pressure and/or a change of a blood analyte level. Depending on an occurrence of such a change of the physiological property and/or depending on an extent to which a change of the physiological property occurs, it may be determined whether the medication has been taken, and/or drinking (without the medication intake) may thus be distinguished from a medication intake (which may be accompanied by drinking). To illustrate, a change of blood pressure may rather be identified as a side effect of a medication intake as compared to a drinking of (pure) water and may therefore serve as an indicator of the medication intake. A change of blood pressure, however, may also be caused by certain beverages (e.g., coffee). To distinguish the medication intake from the drinking of such beverages (without medication intake), a certain extent of the blood pressure change may be determined which may then be associated with the medication rather than the drinking of such beverages, and/or, in a case in which a decrease of the blood pressure is determined, this may be directly employed as an indicator of the medication intake (e.g., because drinking of those beverages typically only leads to a blood pressure increase, whereas certain types of medication can also cause a blood pressure decrease). Furthermore, additional indicators of the medication intake may be determined (e.g., a presence or absence of an analyte indicative of a certain drug component in a body fluid) to increase the medication intake determination accuracy. Detecting this phase of an impact on a physiological property of the user caused by the medication intake and/or drinking (which may be denoted as "phase 4") may be performed by one or more physiological sensors configured to provide physiological signals indicative of a physiological property of the user, e.g. cardiovascular sensors, heart rate sensors, blood pressure sensors, blood analyte sensors, body fluid analyte sensors, etc., as described above. Those physiological sensors may be included in the hearing device and/or another body worn device (e.g., a smartwatch, wristband, finger ring, belt, etc.).

[0160] In order to detect any of the different phases described above, a suitable detection algorithm can be employed analogous to the detection algorithms described above in conjunction with the estimation of a dehydration risk (for instance, by a separate processing of each of the multiple phases, e.g., in a temporal manner, or by a processing without separating the different phases and/or by a separate processing of some phases and non-separate processing of some other phases).

[0161] A risk of an irregular and/or omitted medication intake may then be estimated dependent on the determined medication intake behavior. E.g., the determined amount of medication taken by the user may be compared with a certain value "Mmin" describing the necessary minimum amount of medication, which may be entered by the user or a medical professional and might be stored in a table, e.g., in a memory of the hearing device or another user device. Further, a risk of an excessive medication intake may also be estimated dependent on the determined medication intake behavior. E.g., the determined amount of medication taken by the user may be compared with a certain value "Mmax" describing a maximum amount of medication which should not be exceeded, e.g., to avoid negative medical side effects or an over-medication. The value "Mmax" may also be entered by the user or a medical professional and stored in a memory. Counseling the user with regard to the medication intake might include several options, as also described above in conjunction with the dehydration risk, e.g., displaying the required medication intake versus the actual medication intake on a user device 14 and/or providing user notifications to remind the user about the medication intake, which may occur, e.g., during or after a detected drinking activity of the user and/or when a risk associated with an irregular and/or omitted and/or excessive medication intake has been determined.
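A sketch of this comparison (the dose counts and "Mmin"/"Mmax" bounds are hypothetical placeholders):

    def medication_risk(doses_taken_today, m_min=2, m_max=4):
        """Compare the detected number of doses with hypothetical "Mmin"/"Mmax"
        bounds entered by the user or a medical professional."""
        if doses_taken_today < m_min:
            return "risk: irregular or omitted medication intake, remind the user"
        if doses_taken_today > m_max:
            return "risk: excessive medication intake, warn the user"
        return "medication intake within the prescribed range"

    print(medication_risk(1))  # risk: irregular or omitted medication intake, ...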

[0162] While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.

[0163] Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or controller or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

LIST OF REFERENCE SYMBOLS

[0164] 10 hearing system
[0165] 12 hearing device
[0166] 14 connected user device
[0167] 15 part behind the ear
[0168] 16 part in the ear
[0169] 18 tube
[0170] 20 microphone(s)
[0171] 22 sound processor
[0172] 24 sound output device
[0173] 26 processor
[0174] 28 knob
[0175] 30 memory
[0176] 32 transceiver
[0177] 34 transceiver
[0178] 36 processor
[0179] 38 memory
[0180] 40 graphical user interface, interactive user interface
[0181] 42 display
[0182] 44 control element, slider
[0183] 46 indicator element
[0184] 50 further sensor, in particular a movement and/or orientation sensor, e.g., an accelerometer, and/or a physiological sensor, e.g., a photoplethysmography (PPG) sensor and/or an electrocardiography (ECG) sensor and/or a blood analyte sensor

* * * * *

