Discussion sessions I

Chairpersons

Tobias Neher (Scientific Co-Chair) 

Dirk Junius (Industrial Co-Chair) 

Birger Kollmeier (Organizational Co-Chair)

Thursday, June 12th 2025, afternoon

I Hardware and Acoustics

a) Hearable-centered assistive systems

Session chairs: Dirk Junius (WSA), Tanja Schultz (University of Bremen)

Short description: The ear is potentially a favorable location for obtaining human health data, and hearing devices with multi-modal sensors could serve as a suitable central data-acquisition unit. This session looks at the various challenges that need to be tackled to ensure that the gathered information can be collected efficiently and provides benefit to end-users or healthcare providers, e.g.:

  • Evaluations of the efficiency and accuracy of sensors and algorithms for data analysis, including cost/benefit ratio,
  • Standardization of data structures and interfaces (e.g. fitting software – medical records – health insurance),
  • Data fusion and data privacy,
  • Benefits for healthcare and end-users.

Agenda 

 

II Audiology

b) Role of HCPs and ML in the future fitting of hearing aids

Session chairs: Niels Pontoppidan (Oticon), Vinzenz Schönfelder (Mimi)

Short description: How will machine learning affect the future of hearing aid fitting? How can it benefit the work of the Hearing Health Care Practitioner (HCP), e.g. by giving smarter, data-based guidance during the fitting workflow? Can machine learning also improve self-fitting success for end-users purchasing OTC devices? And finally, what is the potential of AI-based digital assistant apps used for fine-tuning in everyday life?

Agenda 

 

III Signal Processing and AI

c) ML-based hearing device processing: from targeted signal processing to semantic hearing

Session chairs: Martin McKinney (Starkey), Simon Doclo (University of Oldenburg)

Short description: Machine learning (ML) has enabled great progress in multi-microphone processing, speaker-specific signal enhancement, and other (low-level) signal enhancement and noise reduction problems. Moreover, large language models make it possible to semantically interpret the environment, perform scene classification (e.g. with additional video input), and eventually address the listener's "hearing wish" in order to steer signal enhancement. How far have these audiologists' dreams already come true? How feasible are current ML techniques for use in hearing devices with moderate processing capabilities, limited transmission bandwidths, and strict latency requirements?

Agenda 

 
